00:00:00.001 Started by upstream project "autotest-nightly" build number 3884 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3264 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.159 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.159 The recommended git tool is: git 00:00:00.159 using credential 00000000-0000-0000-0000-000000000002 00:00:00.161 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.186 Fetching changes from the remote Git repository 00:00:00.189 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.217 Using shallow fetch with depth 1 00:00:00.217 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.217 > git --version # timeout=10 00:00:00.237 > git --version # 'git version 2.39.2' 00:00:00.237 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.255 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.255 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.420 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.430 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.440 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:08.440 > git config core.sparsecheckout # timeout=10 00:00:08.451 > git read-tree -mu HEAD # timeout=10 00:00:08.467 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:08.487 Commit message: "inventory: add WCP3 to free inventory" 00:00:08.488 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:08.583 [Pipeline] Start of Pipeline 00:00:08.595 [Pipeline] library 00:00:08.596 Loading library shm_lib@master 00:00:08.597 Library shm_lib@master is cached. Copying from home. 00:00:08.609 [Pipeline] node 00:00:08.618 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:08.619 [Pipeline] { 00:00:08.628 [Pipeline] catchError 00:00:08.629 [Pipeline] { 00:00:08.640 [Pipeline] wrap 00:00:08.648 [Pipeline] { 00:00:08.655 [Pipeline] stage 00:00:08.657 [Pipeline] { (Prologue) 00:00:08.822 [Pipeline] sh 00:00:09.102 + logger -p user.info -t JENKINS-CI 00:00:09.122 [Pipeline] echo 00:00:09.123 Node: GP11 00:00:09.131 [Pipeline] sh 00:00:09.429 [Pipeline] setCustomBuildProperty 00:00:09.444 [Pipeline] echo 00:00:09.445 Cleanup processes 00:00:09.449 [Pipeline] sh 00:00:09.773 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.773 460768 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.785 [Pipeline] sh 00:00:10.065 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:10.065 ++ grep -v 'sudo pgrep' 00:00:10.065 ++ awk '{print $1}' 00:00:10.065 + sudo kill -9 00:00:10.065 + true 00:00:10.078 [Pipeline] cleanWs 00:00:10.087 [WS-CLEANUP] Deleting project workspace... 00:00:10.087 [WS-CLEANUP] Deferred wipeout is used... 
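The process-cleanup step above reduces to a small shell idiom. A minimal sketch, assuming the same workspace path shown in the log; xargs -r stands in for the log's bare kill -9, which failed harmlessly here because no stale PIDs were found and the trailing + true swallowed the error:

    # List processes touching the workspace, drop the pgrep invocation itself,
    # extract the PIDs, and force-kill them; xargs -r makes an empty list a no-op.
    sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk \
        | grep -v 'sudo pgrep' \
        | awk '{print $1}' \
        | xargs -r sudo kill -9 || true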
00:00:10.093 [WS-CLEANUP] done 00:00:10.097 [Pipeline] setCustomBuildProperty 00:00:10.113 [Pipeline] sh 00:00:10.394 + sudo git config --global --replace-all safe.directory '*' 00:00:10.488 [Pipeline] httpRequest 00:00:10.511 [Pipeline] echo 00:00:10.512 Sorcerer 10.211.164.101 is alive 00:00:10.521 [Pipeline] httpRequest 00:00:10.525 HttpMethod: GET 00:00:10.525 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:10.526 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:10.528 Response Code: HTTP/1.1 200 OK 00:00:10.529 Success: Status code 200 is in the accepted range: 200,404 00:00:10.529 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:11.623 [Pipeline] sh 00:00:11.910 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:11.928 [Pipeline] httpRequest 00:00:11.963 [Pipeline] echo 00:00:11.965 Sorcerer 10.211.164.101 is alive 00:00:11.975 [Pipeline] httpRequest 00:00:11.980 HttpMethod: GET 00:00:11.980 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:11.981 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:11.999 Response Code: HTTP/1.1 200 OK 00:00:11.999 Success: Status code 200 is in the accepted range: 200,404 00:00:12.000 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:43.385 [Pipeline] sh 00:00:43.671 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:46.226 [Pipeline] sh 00:00:46.521 + git -C spdk log --oneline -n5 00:00:46.521 719d03c6a sock/uring: only register net impl if supported 00:00:46.521 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:00:46.521 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:00:46.521 6c7c1f57e accel: add sequence outstanding stat 00:00:46.521 3bc8e6a26 accel: add utility to put task 00:00:46.591 [Pipeline] } 00:00:46.606 [Pipeline] // stage 00:00:46.613 [Pipeline] stage 00:00:46.615 [Pipeline] { (Prepare) 00:00:46.630 [Pipeline] writeFile 00:00:46.644 [Pipeline] sh 00:00:46.923 + logger -p user.info -t JENKINS-CI 00:00:46.936 [Pipeline] sh 00:00:47.218 + logger -p user.info -t JENKINS-CI 00:00:47.230 [Pipeline] sh 00:00:47.512 + cat autorun-spdk.conf 00:00:47.513 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:47.513 SPDK_TEST_NVMF=1 00:00:47.513 SPDK_TEST_NVME_CLI=1 00:00:47.513 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:47.513 SPDK_TEST_NVMF_NICS=e810 00:00:47.513 SPDK_RUN_ASAN=1 00:00:47.513 SPDK_RUN_UBSAN=1 00:00:47.513 NET_TYPE=phy 00:00:47.520 RUN_NIGHTLY=1 00:00:47.525 [Pipeline] readFile 00:00:47.551 [Pipeline] withEnv 00:00:47.553 [Pipeline] { 00:00:47.567 [Pipeline] sh 00:00:47.851 + set -ex 00:00:47.851 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:47.851 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:47.851 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:47.851 ++ SPDK_TEST_NVMF=1 00:00:47.851 ++ SPDK_TEST_NVME_CLI=1 00:00:47.851 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:47.851 ++ SPDK_TEST_NVMF_NICS=e810 00:00:47.851 ++ SPDK_RUN_ASAN=1 00:00:47.851 ++ SPDK_RUN_UBSAN=1 00:00:47.851 ++ NET_TYPE=phy 00:00:47.851 ++ RUN_NIGHTLY=1 00:00:47.851 + case $SPDK_TEST_NVMF_NICS in 00:00:47.851 + DRIVERS=ice 00:00:47.851 + [[ tcp == \r\d\m\a ]] 00:00:47.851 + 
[[ -n ice ]] 00:00:47.851 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:47.851 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:47.851 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:47.851 rmmod: ERROR: Module irdma is not currently loaded 00:00:47.851 rmmod: ERROR: Module i40iw is not currently loaded 00:00:47.851 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:47.851 + true 00:00:47.851 + for D in $DRIVERS 00:00:47.851 + sudo modprobe ice 00:00:47.851 + exit 0 00:00:47.861 [Pipeline] } 00:00:47.878 [Pipeline] // withEnv 00:00:47.883 [Pipeline] } 00:00:47.899 [Pipeline] // stage 00:00:47.908 [Pipeline] catchError 00:00:47.910 [Pipeline] { 00:00:47.925 [Pipeline] timeout 00:00:47.925 Timeout set to expire in 50 min 00:00:47.927 [Pipeline] { 00:00:47.942 [Pipeline] stage 00:00:47.943 [Pipeline] { (Tests) 00:00:47.959 [Pipeline] sh 00:00:48.243 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:48.243 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:48.243 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:48.243 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:48.243 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:48.243 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:48.243 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:48.243 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:48.243 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:48.243 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:48.243 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:48.243 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:48.243 + source /etc/os-release 00:00:48.243 ++ NAME='Fedora Linux' 00:00:48.243 ++ VERSION='38 (Cloud Edition)' 00:00:48.243 ++ ID=fedora 00:00:48.243 ++ VERSION_ID=38 00:00:48.243 ++ VERSION_CODENAME= 00:00:48.243 ++ PLATFORM_ID=platform:f38 00:00:48.243 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:48.243 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:48.243 ++ LOGO=fedora-logo-icon 00:00:48.243 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:48.243 ++ HOME_URL=https://fedoraproject.org/ 00:00:48.243 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:48.243 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:48.243 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:48.243 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:48.243 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:48.243 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:48.243 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:48.243 ++ SUPPORT_END=2024-05-14 00:00:48.243 ++ VARIANT='Cloud Edition' 00:00:48.243 ++ VARIANT_ID=cloud 00:00:48.243 + uname -a 00:00:48.243 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:48.243 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:49.180 Hugepages 00:00:49.180 node hugesize free / total 00:00:49.180 node0 1048576kB 0 / 0 00:00:49.180 node0 2048kB 0 / 0 00:00:49.180 node1 1048576kB 0 / 0 00:00:49.180 node1 2048kB 0 / 0 00:00:49.180 00:00:49.180 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:49.180 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:00:49.180 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:00:49.180 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:00:49.180 I/OAT 0000:00:04.3 8086 0e23 0 
ioatdma - - 00:00:49.180 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:00:49.180 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:00:49.180 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:00:49.180 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:00:49.180 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:00:49.180 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:00:49.180 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:00:49.180 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:00:49.180 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:00:49.180 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:00:49.180 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:00:49.180 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:00:49.180 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:49.180 + rm -f /tmp/spdk-ld-path 00:00:49.180 + source autorun-spdk.conf 00:00:49.180 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:49.180 ++ SPDK_TEST_NVMF=1 00:00:49.180 ++ SPDK_TEST_NVME_CLI=1 00:00:49.180 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:49.180 ++ SPDK_TEST_NVMF_NICS=e810 00:00:49.180 ++ SPDK_RUN_ASAN=1 00:00:49.180 ++ SPDK_RUN_UBSAN=1 00:00:49.180 ++ NET_TYPE=phy 00:00:49.180 ++ RUN_NIGHTLY=1 00:00:49.180 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:49.180 + [[ -n '' ]] 00:00:49.180 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:49.180 + for M in /var/spdk/build-*-manifest.txt 00:00:49.180 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:49.180 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:49.180 + for M in /var/spdk/build-*-manifest.txt 00:00:49.180 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:49.180 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:49.180 ++ uname 00:00:49.180 + [[ Linux == \L\i\n\u\x ]] 00:00:49.180 + sudo dmesg -T 00:00:49.180 + sudo dmesg --clear 00:00:49.439 + dmesg_pid=461443 00:00:49.439 + [[ Fedora Linux == FreeBSD ]] 00:00:49.439 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:49.439 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:49.439 + sudo dmesg -Tw 00:00:49.439 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:49.439 + [[ -x /usr/src/fio-static/fio ]] 00:00:49.439 + export FIO_BIN=/usr/src/fio-static/fio 00:00:49.439 + FIO_BIN=/usr/src/fio-static/fio 00:00:49.439 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:49.439 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:49.439 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:49.439 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:49.439 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:49.439 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:49.439 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:49.439 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:49.439 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:49.439 Test configuration: 00:00:49.439 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:49.439 SPDK_TEST_NVMF=1 00:00:49.439 SPDK_TEST_NVME_CLI=1 00:00:49.439 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:49.439 SPDK_TEST_NVMF_NICS=e810 00:00:49.439 SPDK_RUN_ASAN=1 00:00:49.439 SPDK_RUN_UBSAN=1 00:00:49.439 NET_TYPE=phy 00:00:49.439 RUN_NIGHTLY=1 04:49:55 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:49.439 04:49:55 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:49.439 04:49:55 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:49.439 04:49:55 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:49.439 04:49:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:49.439 04:49:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:49.439 04:49:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:49.439 04:49:55 -- paths/export.sh@5 -- $ export PATH 00:00:49.439 04:49:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:49.439 04:49:55 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:49.439 04:49:55 -- common/autobuild_common.sh@444 -- $ date +%s 00:00:49.439 04:49:55 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720838995.XXXXXX 00:00:49.439 04:49:55 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720838995.UeuKE3 00:00:49.439 04:49:55 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:00:49.439 04:49:55 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:00:49.439 04:49:55 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:49.439 04:49:55 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:49.439 04:49:55 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:49.439 04:49:55 -- common/autobuild_common.sh@460 -- $ get_config_params 00:00:49.439 04:49:55 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:49.439 04:49:55 -- common/autotest_common.sh@10 -- $ set +x 00:00:49.439 04:49:55 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:00:49.439 04:49:55 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:00:49.439 04:49:55 -- pm/common@17 -- $ local monitor 00:00:49.439 04:49:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:49.439 04:49:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:49.439 04:49:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:49.439 04:49:55 -- pm/common@21 -- $ date +%s 00:00:49.439 04:49:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:49.439 04:49:55 -- pm/common@21 -- $ date +%s 00:00:49.439 04:49:55 -- pm/common@25 -- $ sleep 1 00:00:49.439 04:49:55 -- pm/common@21 -- $ date +%s 00:00:49.439 04:49:55 -- pm/common@21 -- $ date +%s 00:00:49.439 04:49:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720838995 00:00:49.439 04:49:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720838995 00:00:49.439 04:49:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720838995 00:00:49.439 04:49:55 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720838995 00:00:49.439 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720838995_collect-vmstat.pm.log 00:00:49.439 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720838995_collect-cpu-load.pm.log 00:00:49.439 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720838995_collect-cpu-temp.pm.log 00:00:49.439 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720838995_collect-bmc-pm.bmc.pm.log 00:00:50.375 04:49:56 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:00:50.375 04:49:56 -- 
spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:50.375 04:49:56 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:50.375 04:49:56 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:50.375 04:49:56 -- spdk/autobuild.sh@16 -- $ date -u 00:00:50.375 Sat Jul 13 02:49:56 AM UTC 2024 00:00:50.375 04:49:56 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:50.375 v24.09-pre-202-g719d03c6a 00:00:50.375 04:49:56 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:00:50.375 04:49:56 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:00:50.375 04:49:56 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:50.375 04:49:56 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:50.375 04:49:56 -- common/autotest_common.sh@10 -- $ set +x 00:00:50.375 ************************************ 00:00:50.375 START TEST asan 00:00:50.375 ************************************ 00:00:50.375 04:49:56 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:00:50.375 using asan 00:00:50.375 00:00:50.375 real 0m0.000s 00:00:50.375 user 0m0.000s 00:00:50.375 sys 0m0.000s 00:00:50.375 04:49:56 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:50.375 04:49:56 asan -- common/autotest_common.sh@10 -- $ set +x 00:00:50.375 ************************************ 00:00:50.375 END TEST asan 00:00:50.375 ************************************ 00:00:50.375 04:49:56 -- common/autotest_common.sh@1142 -- $ return 0 00:00:50.375 04:49:56 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:50.375 04:49:56 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:50.375 04:49:56 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:50.375 04:49:56 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:50.375 04:49:56 -- common/autotest_common.sh@10 -- $ set +x 00:00:50.375 ************************************ 00:00:50.375 START TEST ubsan 00:00:50.375 ************************************ 00:00:50.375 04:49:56 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:00:50.375 using ubsan 00:00:50.375 00:00:50.375 real 0m0.000s 00:00:50.375 user 0m0.000s 00:00:50.375 sys 0m0.000s 00:00:50.375 04:49:56 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:50.375 04:49:56 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:50.375 ************************************ 00:00:50.375 END TEST ubsan 00:00:50.375 ************************************ 00:00:50.375 04:49:56 -- common/autotest_common.sh@1142 -- $ return 0 00:00:50.375 04:49:56 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:50.375 04:49:56 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:50.375 04:49:56 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:50.375 04:49:56 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:50.375 04:49:56 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:50.375 04:49:56 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:50.634 04:49:56 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:50.634 04:49:56 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:50.634 04:49:56 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:00:50.634 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:50.634 Using default DPDK in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:50.892 Using 'verbs' RDMA provider 00:01:01.435 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:11.425 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:11.425 Creating mk/config.mk...done. 00:01:11.425 Creating mk/cc.flags.mk...done. 00:01:11.425 Type 'make' to build. 00:01:11.425 04:50:17 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:11.425 04:50:17 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:11.425 04:50:17 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:11.425 04:50:17 -- common/autotest_common.sh@10 -- $ set +x 00:01:11.425 ************************************ 00:01:11.425 START TEST make 00:01:11.425 ************************************ 00:01:11.426 04:50:17 make -- common/autotest_common.sh@1123 -- $ make -j48 00:01:11.426 make[1]: Nothing to be done for 'all'. 00:01:19.567 The Meson build system 00:01:19.567 Version: 1.3.1 00:01:19.567 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:19.567 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:19.567 Build type: native build 00:01:19.567 Program cat found: YES (/usr/bin/cat) 00:01:19.567 Project name: DPDK 00:01:19.567 Project version: 24.03.0 00:01:19.567 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:19.567 C linker for the host machine: cc ld.bfd 2.39-16 00:01:19.567 Host machine cpu family: x86_64 00:01:19.567 Host machine cpu: x86_64 00:01:19.567 Message: ## Building in Developer Mode ## 00:01:19.567 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:19.567 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:19.567 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:19.567 Program python3 found: YES (/usr/bin/python3) 00:01:19.567 Program cat found: YES (/usr/bin/cat) 00:01:19.567 Compiler for C supports arguments -march=native: YES 00:01:19.567 Checking for size of "void *" : 8 00:01:19.567 Checking for size of "void *" : 8 (cached) 00:01:19.567 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:19.567 Library m found: YES 00:01:19.567 Library numa found: YES 00:01:19.567 Has header "numaif.h" : YES 00:01:19.567 Library fdt found: NO 00:01:19.567 Library execinfo found: NO 00:01:19.567 Has header "execinfo.h" : YES 00:01:19.567 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:19.567 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:19.567 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:19.567 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:19.567 Run-time dependency openssl found: YES 3.0.9 00:01:19.567 Run-time dependency libpcap found: YES 1.10.4 00:01:19.567 Has header "pcap.h" with dependency libpcap: YES 00:01:19.567 Compiler for C supports arguments -Wcast-qual: YES 00:01:19.567 Compiler for C supports arguments -Wdeprecated: YES 00:01:19.567 Compiler for C supports arguments -Wformat: YES 00:01:19.567 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:19.567 Compiler for C supports arguments -Wformat-security: NO 00:01:19.567 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:19.567 Compiler for C supports 
arguments -Wmissing-prototypes: YES 00:01:19.567 Compiler for C supports arguments -Wnested-externs: YES 00:01:19.567 Compiler for C supports arguments -Wold-style-definition: YES 00:01:19.567 Compiler for C supports arguments -Wpointer-arith: YES 00:01:19.567 Compiler for C supports arguments -Wsign-compare: YES 00:01:19.567 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:19.567 Compiler for C supports arguments -Wundef: YES 00:01:19.567 Compiler for C supports arguments -Wwrite-strings: YES 00:01:19.567 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:19.567 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:19.567 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:19.567 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:19.567 Program objdump found: YES (/usr/bin/objdump) 00:01:19.567 Compiler for C supports arguments -mavx512f: YES 00:01:19.567 Checking if "AVX512 checking" compiles: YES 00:01:19.567 Fetching value of define "__SSE4_2__" : 1 00:01:19.567 Fetching value of define "__AES__" : 1 00:01:19.567 Fetching value of define "__AVX__" : 1 00:01:19.567 Fetching value of define "__AVX2__" : (undefined) 00:01:19.567 Fetching value of define "__AVX512BW__" : (undefined) 00:01:19.567 Fetching value of define "__AVX512CD__" : (undefined) 00:01:19.567 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:19.567 Fetching value of define "__AVX512F__" : (undefined) 00:01:19.567 Fetching value of define "__AVX512VL__" : (undefined) 00:01:19.567 Fetching value of define "__PCLMUL__" : 1 00:01:19.567 Fetching value of define "__RDRND__" : 1 00:01:19.567 Fetching value of define "__RDSEED__" : (undefined) 00:01:19.567 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:19.567 Fetching value of define "__znver1__" : (undefined) 00:01:19.567 Fetching value of define "__znver2__" : (undefined) 00:01:19.567 Fetching value of define "__znver3__" : (undefined) 00:01:19.567 Fetching value of define "__znver4__" : (undefined) 00:01:19.567 Library asan found: YES 00:01:19.567 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:19.567 Message: lib/log: Defining dependency "log" 00:01:19.567 Message: lib/kvargs: Defining dependency "kvargs" 00:01:19.567 Message: lib/telemetry: Defining dependency "telemetry" 00:01:19.567 Library rt found: YES 00:01:19.567 Checking for function "getentropy" : NO 00:01:19.567 Message: lib/eal: Defining dependency "eal" 00:01:19.567 Message: lib/ring: Defining dependency "ring" 00:01:19.567 Message: lib/rcu: Defining dependency "rcu" 00:01:19.567 Message: lib/mempool: Defining dependency "mempool" 00:01:19.567 Message: lib/mbuf: Defining dependency "mbuf" 00:01:19.567 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:19.567 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:19.567 Compiler for C supports arguments -mpclmul: YES 00:01:19.567 Compiler for C supports arguments -maes: YES 00:01:19.567 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:19.567 Compiler for C supports arguments -mavx512bw: YES 00:01:19.567 Compiler for C supports arguments -mavx512dq: YES 00:01:19.567 Compiler for C supports arguments -mavx512vl: YES 00:01:19.567 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:19.567 Compiler for C supports arguments -mavx2: YES 00:01:19.567 Compiler for C supports arguments -mavx: YES 00:01:19.567 Message: lib/net: Defining dependency "net" 00:01:19.567 Message: lib/meter: Defining 
dependency "meter" 00:01:19.567 Message: lib/ethdev: Defining dependency "ethdev" 00:01:19.567 Message: lib/pci: Defining dependency "pci" 00:01:19.567 Message: lib/cmdline: Defining dependency "cmdline" 00:01:19.567 Message: lib/hash: Defining dependency "hash" 00:01:19.567 Message: lib/timer: Defining dependency "timer" 00:01:19.567 Message: lib/compressdev: Defining dependency "compressdev" 00:01:19.567 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:19.567 Message: lib/dmadev: Defining dependency "dmadev" 00:01:19.567 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:19.567 Message: lib/power: Defining dependency "power" 00:01:19.567 Message: lib/reorder: Defining dependency "reorder" 00:01:19.567 Message: lib/security: Defining dependency "security" 00:01:19.567 Has header "linux/userfaultfd.h" : YES 00:01:19.567 Has header "linux/vduse.h" : YES 00:01:19.567 Message: lib/vhost: Defining dependency "vhost" 00:01:19.567 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:19.567 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:19.567 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:19.567 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:19.567 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:19.567 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:19.567 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:19.567 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:19.567 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:19.567 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:19.567 Program doxygen found: YES (/usr/bin/doxygen) 00:01:19.567 Configuring doxy-api-html.conf using configuration 00:01:19.567 Configuring doxy-api-man.conf using configuration 00:01:19.567 Program mandb found: YES (/usr/bin/mandb) 00:01:19.567 Program sphinx-build found: NO 00:01:19.567 Configuring rte_build_config.h using configuration 00:01:19.567 Message: 00:01:19.567 ================= 00:01:19.567 Applications Enabled 00:01:19.567 ================= 00:01:19.567 00:01:19.567 apps: 00:01:19.567 00:01:19.567 00:01:19.567 Message: 00:01:19.567 ================= 00:01:19.567 Libraries Enabled 00:01:19.567 ================= 00:01:19.567 00:01:19.567 libs: 00:01:19.567 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:19.567 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:19.567 cryptodev, dmadev, power, reorder, security, vhost, 00:01:19.567 00:01:19.567 Message: 00:01:19.567 =============== 00:01:19.567 Drivers Enabled 00:01:19.567 =============== 00:01:19.567 00:01:19.567 common: 00:01:19.567 00:01:19.567 bus: 00:01:19.567 pci, vdev, 00:01:19.567 mempool: 00:01:19.567 ring, 00:01:19.567 dma: 00:01:19.567 00:01:19.567 net: 00:01:19.567 00:01:19.567 crypto: 00:01:19.567 00:01:19.567 compress: 00:01:19.567 00:01:19.567 vdpa: 00:01:19.567 00:01:19.567 00:01:19.567 Message: 00:01:19.567 ================= 00:01:19.567 Content Skipped 00:01:19.567 ================= 00:01:19.567 00:01:19.567 apps: 00:01:19.567 dumpcap: explicitly disabled via build config 00:01:19.567 graph: explicitly disabled via build config 00:01:19.567 pdump: explicitly disabled via build config 00:01:19.567 proc-info: explicitly disabled via build config 00:01:19.567 test-acl: explicitly disabled via build config 00:01:19.567 test-bbdev: explicitly 
disabled via build config 00:01:19.567 test-cmdline: explicitly disabled via build config 00:01:19.567 test-compress-perf: explicitly disabled via build config 00:01:19.567 test-crypto-perf: explicitly disabled via build config 00:01:19.568 test-dma-perf: explicitly disabled via build config 00:01:19.568 test-eventdev: explicitly disabled via build config 00:01:19.568 test-fib: explicitly disabled via build config 00:01:19.568 test-flow-perf: explicitly disabled via build config 00:01:19.568 test-gpudev: explicitly disabled via build config 00:01:19.568 test-mldev: explicitly disabled via build config 00:01:19.568 test-pipeline: explicitly disabled via build config 00:01:19.568 test-pmd: explicitly disabled via build config 00:01:19.568 test-regex: explicitly disabled via build config 00:01:19.568 test-sad: explicitly disabled via build config 00:01:19.568 test-security-perf: explicitly disabled via build config 00:01:19.568 00:01:19.568 libs: 00:01:19.568 argparse: explicitly disabled via build config 00:01:19.568 metrics: explicitly disabled via build config 00:01:19.568 acl: explicitly disabled via build config 00:01:19.568 bbdev: explicitly disabled via build config 00:01:19.568 bitratestats: explicitly disabled via build config 00:01:19.568 bpf: explicitly disabled via build config 00:01:19.568 cfgfile: explicitly disabled via build config 00:01:19.568 distributor: explicitly disabled via build config 00:01:19.568 efd: explicitly disabled via build config 00:01:19.568 eventdev: explicitly disabled via build config 00:01:19.568 dispatcher: explicitly disabled via build config 00:01:19.568 gpudev: explicitly disabled via build config 00:01:19.568 gro: explicitly disabled via build config 00:01:19.568 gso: explicitly disabled via build config 00:01:19.568 ip_frag: explicitly disabled via build config 00:01:19.568 jobstats: explicitly disabled via build config 00:01:19.568 latencystats: explicitly disabled via build config 00:01:19.568 lpm: explicitly disabled via build config 00:01:19.568 member: explicitly disabled via build config 00:01:19.568 pcapng: explicitly disabled via build config 00:01:19.568 rawdev: explicitly disabled via build config 00:01:19.568 regexdev: explicitly disabled via build config 00:01:19.568 mldev: explicitly disabled via build config 00:01:19.568 rib: explicitly disabled via build config 00:01:19.568 sched: explicitly disabled via build config 00:01:19.568 stack: explicitly disabled via build config 00:01:19.568 ipsec: explicitly disabled via build config 00:01:19.568 pdcp: explicitly disabled via build config 00:01:19.568 fib: explicitly disabled via build config 00:01:19.568 port: explicitly disabled via build config 00:01:19.568 pdump: explicitly disabled via build config 00:01:19.568 table: explicitly disabled via build config 00:01:19.568 pipeline: explicitly disabled via build config 00:01:19.568 graph: explicitly disabled via build config 00:01:19.568 node: explicitly disabled via build config 00:01:19.568 00:01:19.568 drivers: 00:01:19.568 common/cpt: not in enabled drivers build config 00:01:19.568 common/dpaax: not in enabled drivers build config 00:01:19.568 common/iavf: not in enabled drivers build config 00:01:19.568 common/idpf: not in enabled drivers build config 00:01:19.568 common/ionic: not in enabled drivers build config 00:01:19.568 common/mvep: not in enabled drivers build config 00:01:19.568 common/octeontx: not in enabled drivers build config 00:01:19.568 bus/auxiliary: not in enabled drivers build config 00:01:19.568 bus/cdx: not in 
enabled drivers build config 00:01:19.568 bus/dpaa: not in enabled drivers build config 00:01:19.568 bus/fslmc: not in enabled drivers build config 00:01:19.568 bus/ifpga: not in enabled drivers build config 00:01:19.568 bus/platform: not in enabled drivers build config 00:01:19.568 bus/uacce: not in enabled drivers build config 00:01:19.568 bus/vmbus: not in enabled drivers build config 00:01:19.568 common/cnxk: not in enabled drivers build config 00:01:19.568 common/mlx5: not in enabled drivers build config 00:01:19.568 common/nfp: not in enabled drivers build config 00:01:19.568 common/nitrox: not in enabled drivers build config 00:01:19.568 common/qat: not in enabled drivers build config 00:01:19.568 common/sfc_efx: not in enabled drivers build config 00:01:19.568 mempool/bucket: not in enabled drivers build config 00:01:19.568 mempool/cnxk: not in enabled drivers build config 00:01:19.568 mempool/dpaa: not in enabled drivers build config 00:01:19.568 mempool/dpaa2: not in enabled drivers build config 00:01:19.568 mempool/octeontx: not in enabled drivers build config 00:01:19.568 mempool/stack: not in enabled drivers build config 00:01:19.568 dma/cnxk: not in enabled drivers build config 00:01:19.568 dma/dpaa: not in enabled drivers build config 00:01:19.568 dma/dpaa2: not in enabled drivers build config 00:01:19.568 dma/hisilicon: not in enabled drivers build config 00:01:19.568 dma/idxd: not in enabled drivers build config 00:01:19.568 dma/ioat: not in enabled drivers build config 00:01:19.568 dma/skeleton: not in enabled drivers build config 00:01:19.568 net/af_packet: not in enabled drivers build config 00:01:19.568 net/af_xdp: not in enabled drivers build config 00:01:19.568 net/ark: not in enabled drivers build config 00:01:19.568 net/atlantic: not in enabled drivers build config 00:01:19.568 net/avp: not in enabled drivers build config 00:01:19.568 net/axgbe: not in enabled drivers build config 00:01:19.568 net/bnx2x: not in enabled drivers build config 00:01:19.568 net/bnxt: not in enabled drivers build config 00:01:19.568 net/bonding: not in enabled drivers build config 00:01:19.568 net/cnxk: not in enabled drivers build config 00:01:19.568 net/cpfl: not in enabled drivers build config 00:01:19.568 net/cxgbe: not in enabled drivers build config 00:01:19.568 net/dpaa: not in enabled drivers build config 00:01:19.568 net/dpaa2: not in enabled drivers build config 00:01:19.568 net/e1000: not in enabled drivers build config 00:01:19.568 net/ena: not in enabled drivers build config 00:01:19.568 net/enetc: not in enabled drivers build config 00:01:19.568 net/enetfec: not in enabled drivers build config 00:01:19.568 net/enic: not in enabled drivers build config 00:01:19.568 net/failsafe: not in enabled drivers build config 00:01:19.568 net/fm10k: not in enabled drivers build config 00:01:19.568 net/gve: not in enabled drivers build config 00:01:19.568 net/hinic: not in enabled drivers build config 00:01:19.568 net/hns3: not in enabled drivers build config 00:01:19.568 net/i40e: not in enabled drivers build config 00:01:19.568 net/iavf: not in enabled drivers build config 00:01:19.568 net/ice: not in enabled drivers build config 00:01:19.568 net/idpf: not in enabled drivers build config 00:01:19.568 net/igc: not in enabled drivers build config 00:01:19.568 net/ionic: not in enabled drivers build config 00:01:19.568 net/ipn3ke: not in enabled drivers build config 00:01:19.568 net/ixgbe: not in enabled drivers build config 00:01:19.568 net/mana: not in enabled drivers build config 
00:01:19.568 net/memif: not in enabled drivers build config 00:01:19.568 net/mlx4: not in enabled drivers build config 00:01:19.568 net/mlx5: not in enabled drivers build config 00:01:19.568 net/mvneta: not in enabled drivers build config 00:01:19.568 net/mvpp2: not in enabled drivers build config 00:01:19.568 net/netvsc: not in enabled drivers build config 00:01:19.568 net/nfb: not in enabled drivers build config 00:01:19.568 net/nfp: not in enabled drivers build config 00:01:19.568 net/ngbe: not in enabled drivers build config 00:01:19.568 net/null: not in enabled drivers build config 00:01:19.568 net/octeontx: not in enabled drivers build config 00:01:19.568 net/octeon_ep: not in enabled drivers build config 00:01:19.568 net/pcap: not in enabled drivers build config 00:01:19.568 net/pfe: not in enabled drivers build config 00:01:19.568 net/qede: not in enabled drivers build config 00:01:19.568 net/ring: not in enabled drivers build config 00:01:19.568 net/sfc: not in enabled drivers build config 00:01:19.568 net/softnic: not in enabled drivers build config 00:01:19.568 net/tap: not in enabled drivers build config 00:01:19.568 net/thunderx: not in enabled drivers build config 00:01:19.568 net/txgbe: not in enabled drivers build config 00:01:19.568 net/vdev_netvsc: not in enabled drivers build config 00:01:19.568 net/vhost: not in enabled drivers build config 00:01:19.568 net/virtio: not in enabled drivers build config 00:01:19.568 net/vmxnet3: not in enabled drivers build config 00:01:19.568 raw/*: missing internal dependency, "rawdev" 00:01:19.568 crypto/armv8: not in enabled drivers build config 00:01:19.568 crypto/bcmfs: not in enabled drivers build config 00:01:19.568 crypto/caam_jr: not in enabled drivers build config 00:01:19.568 crypto/ccp: not in enabled drivers build config 00:01:19.568 crypto/cnxk: not in enabled drivers build config 00:01:19.568 crypto/dpaa_sec: not in enabled drivers build config 00:01:19.568 crypto/dpaa2_sec: not in enabled drivers build config 00:01:19.568 crypto/ipsec_mb: not in enabled drivers build config 00:01:19.568 crypto/mlx5: not in enabled drivers build config 00:01:19.568 crypto/mvsam: not in enabled drivers build config 00:01:19.568 crypto/nitrox: not in enabled drivers build config 00:01:19.568 crypto/null: not in enabled drivers build config 00:01:19.568 crypto/octeontx: not in enabled drivers build config 00:01:19.568 crypto/openssl: not in enabled drivers build config 00:01:19.568 crypto/scheduler: not in enabled drivers build config 00:01:19.568 crypto/uadk: not in enabled drivers build config 00:01:19.568 crypto/virtio: not in enabled drivers build config 00:01:19.568 compress/isal: not in enabled drivers build config 00:01:19.568 compress/mlx5: not in enabled drivers build config 00:01:19.568 compress/nitrox: not in enabled drivers build config 00:01:19.568 compress/octeontx: not in enabled drivers build config 00:01:19.568 compress/zlib: not in enabled drivers build config 00:01:19.568 regex/*: missing internal dependency, "regexdev" 00:01:19.568 ml/*: missing internal dependency, "mldev" 00:01:19.568 vdpa/ifc: not in enabled drivers build config 00:01:19.568 vdpa/mlx5: not in enabled drivers build config 00:01:19.568 vdpa/nfp: not in enabled drivers build config 00:01:19.568 vdpa/sfc: not in enabled drivers build config 00:01:19.568 event/*: missing internal dependency, "eventdev" 00:01:19.568 baseband/*: missing internal dependency, "bbdev" 00:01:19.568 gpu/*: missing internal dependency, "gpudev" 00:01:19.568 00:01:19.568 00:01:19.568 
Build targets in project: 85 00:01:19.568 00:01:19.568 DPDK 24.03.0 00:01:19.568 00:01:19.568 User defined options 00:01:19.568 buildtype : debug 00:01:19.568 default_library : shared 00:01:19.568 libdir : lib 00:01:19.568 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:19.568 b_sanitize : address 00:01:19.568 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:19.568 c_link_args : 00:01:19.568 cpu_instruction_set: native 00:01:19.568 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:19.568 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:19.568 enable_docs : false 00:01:19.568 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:19.568 enable_kmods : false 00:01:19.568 max_lcores : 128 00:01:19.569 tests : false 00:01:19.569 00:01:19.569 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:20.146 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:20.146 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:20.146 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:20.146 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:20.146 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:20.146 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:20.146 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:20.146 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:20.146 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:20.146 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:20.146 [10/268] Linking static target lib/librte_kvargs.a 00:01:20.146 [11/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:20.146 [12/268] Linking static target lib/librte_log.a 00:01:20.405 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:20.405 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:20.405 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:20.405 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:20.985 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.985 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:20.985 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:20.985 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:20.985 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:20.986 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:20.986 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:20.986 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:20.986 
[25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:20.986 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:20.986 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:20.986 [28/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:20.986 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:21.247 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:21.247 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:21.247 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:21.247 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:21.247 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:21.247 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:21.247 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:21.247 [37/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:21.247 [38/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:21.247 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:21.247 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:21.247 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:21.247 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:21.247 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:21.247 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:21.247 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:21.247 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:21.247 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:21.247 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:21.247 [49/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:21.248 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:21.248 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:21.248 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:21.248 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:21.248 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:21.248 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:21.248 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:21.248 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:21.248 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:21.509 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:21.509 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:21.509 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:21.509 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:21.509 [63/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:21.509 [64/268] Linking static target lib/librte_telemetry.a 
00:01:21.509 [65/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.771 [66/268] Linking target lib/librte_log.so.24.1 00:01:21.771 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:21.771 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:22.036 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:22.036 [70/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:22.036 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:22.036 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:22.036 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:22.036 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:22.036 [75/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:22.036 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:22.036 [77/268] Linking static target lib/librte_pci.a 00:01:22.036 [78/268] Linking target lib/librte_kvargs.so.24.1 00:01:22.036 [79/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:22.036 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:22.036 [81/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:22.036 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:22.036 [83/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:22.036 [84/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:22.036 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:22.298 [86/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:22.298 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:22.298 [88/268] Linking static target lib/librte_meter.a 00:01:22.298 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:22.298 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:22.298 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:22.298 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:22.298 [93/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:22.298 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:22.298 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:22.298 [96/268] Linking static target lib/librte_ring.a 00:01:22.298 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:22.298 [98/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:22.298 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:22.298 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:22.298 [101/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:22.298 [102/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:22.298 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:22.298 [104/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:22.298 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:22.298 [106/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:22.298 [107/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:22.298 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:22.559 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:22.559 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:22.559 [111/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:22.559 [112/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:22.559 [113/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:22.559 [114/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:22.559 [115/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:22.559 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:22.559 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:22.559 [118/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.559 [119/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.559 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:22.559 [121/268] Linking static target lib/librte_mempool.a 00:01:22.559 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:22.559 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:22.559 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:22.823 [125/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.823 [126/268] Linking target lib/librte_telemetry.so.24.1 00:01:22.823 [127/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:22.823 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:22.823 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:22.823 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:22.823 [131/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.081 [132/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:23.081 [133/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:23.081 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:23.081 [135/268] Linking static target lib/librte_rcu.a 00:01:23.081 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:23.081 [137/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:23.081 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:23.081 [139/268] Linking static target lib/librte_cmdline.a 00:01:23.081 [140/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:23.081 [141/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:23.081 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:23.081 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:23.081 [144/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:23.344 [145/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:23.344 
[146/268] Linking static target lib/librte_eal.a 00:01:23.344 [147/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:23.344 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:23.344 [149/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:23.344 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:23.344 [151/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:23.344 [152/268] Linking static target lib/librte_timer.a 00:01:23.344 [153/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:23.344 [154/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:23.607 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:23.607 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:23.607 [157/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.607 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:23.607 [159/268] Linking static target lib/librte_dmadev.a 00:01:23.866 [160/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.866 [161/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:23.866 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:23.866 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:23.866 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.866 [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:23.866 [166/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:23.866 [167/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:23.866 [168/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:23.866 [169/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:24.124 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:24.124 [171/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:24.124 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:24.124 [173/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:24.124 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:24.124 [175/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:24.124 [176/268] Linking static target lib/librte_net.a 00:01:24.124 [177/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.124 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.124 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:24.124 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:24.124 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:24.124 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:24.124 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:24.124 [184/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:24.124 [185/268] Linking static target lib/librte_power.a 
00:01:24.382 [186/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:24.382 [187/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:24.382 [188/268] Linking static target drivers/librte_bus_vdev.a 00:01:24.382 [189/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.382 [190/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:24.382 [191/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:24.382 [192/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:24.382 [193/268] Linking static target lib/librte_hash.a 00:01:24.382 [194/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:24.382 [195/268] Linking static target lib/librte_compressdev.a 00:01:24.382 [196/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:24.382 [197/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:24.640 [198/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.640 [199/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:24.640 [200/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:24.640 [201/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:24.640 [202/268] Linking static target drivers/librte_bus_pci.a 00:01:24.640 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:24.640 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:24.640 [205/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:24.640 [206/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:24.640 [207/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:24.640 [208/268] Linking static target drivers/librte_mempool_ring.a 00:01:24.898 [209/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.898 [210/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.898 [211/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:24.898 [212/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.898 [213/268] Linking static target lib/librte_reorder.a 00:01:25.158 [214/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.158 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.415 [216/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:25.415 [217/268] Linking static target lib/librte_security.a 00:01:25.980 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.980 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:26.544 [220/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:26.544 [221/268] Linking static target lib/librte_mbuf.a 00:01:26.802 [222/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:26.802 [223/268] Linking static target lib/librte_cryptodev.a 00:01:27.058 
[224/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.623 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:27.880 [226/268] Linking static target lib/librte_ethdev.a 00:01:27.880 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.306 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.306 [229/268] Linking target lib/librte_eal.so.24.1 00:01:29.565 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:29.565 [231/268] Linking target lib/librte_meter.so.24.1 00:01:29.565 [232/268] Linking target lib/librte_ring.so.24.1 00:01:29.565 [233/268] Linking target lib/librte_pci.so.24.1 00:01:29.565 [234/268] Linking target lib/librte_timer.so.24.1 00:01:29.565 [235/268] Linking target lib/librte_dmadev.so.24.1 00:01:29.565 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:29.565 [237/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:29.565 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:29.565 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:29.565 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:29.565 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:29.565 [242/268] Linking target lib/librte_rcu.so.24.1 00:01:29.565 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:29.565 [244/268] Linking target lib/librte_mempool.so.24.1 00:01:29.823 [245/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:29.823 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:29.823 [247/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:29.823 [248/268] Linking target lib/librte_mbuf.so.24.1 00:01:30.082 [249/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:30.082 [250/268] Linking target lib/librte_reorder.so.24.1 00:01:30.082 [251/268] Linking target lib/librte_compressdev.so.24.1 00:01:30.082 [252/268] Linking target lib/librte_net.so.24.1 00:01:30.082 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:01:30.082 [254/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:30.082 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:30.340 [256/268] Linking target lib/librte_cmdline.so.24.1 00:01:30.340 [257/268] Linking target lib/librte_hash.so.24.1 00:01:30.340 [258/268] Linking target lib/librte_security.so.24.1 00:01:30.340 [259/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:30.598 [260/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:32.498 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.498 [262/268] Linking target lib/librte_ethdev.so.24.1 00:01:32.498 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:32.498 [264/268] Linking target lib/librte_power.so.24.1 00:01:54.417 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:54.417 [266/268] Linking static target lib/librte_vhost.a 00:01:54.676 [267/268] Generating lib/vhost.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:54.934 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:54.934 INFO: autodetecting backend as ninja 00:01:54.934 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:55.870 CC lib/ut/ut.o 00:01:55.870 CC lib/ut_mock/mock.o 00:01:55.870 CC lib/log/log.o 00:01:55.870 CC lib/log/log_flags.o 00:01:55.870 CC lib/log/log_deprecated.o 00:01:55.870 LIB libspdk_log.a 00:01:55.870 LIB libspdk_ut.a 00:01:55.870 LIB libspdk_ut_mock.a 00:01:56.129 SO libspdk_ut.so.2.0 00:01:56.129 SO libspdk_ut_mock.so.6.0 00:01:56.129 SO libspdk_log.so.7.0 00:01:56.129 SYMLINK libspdk_ut_mock.so 00:01:56.129 SYMLINK libspdk_ut.so 00:01:56.129 SYMLINK libspdk_log.so 00:01:56.129 CXX lib/trace_parser/trace.o 00:01:56.129 CC lib/ioat/ioat.o 00:01:56.129 CC lib/dma/dma.o 00:01:56.129 CC lib/util/base64.o 00:01:56.129 CC lib/util/bit_array.o 00:01:56.129 CC lib/util/cpuset.o 00:01:56.129 CC lib/util/crc16.o 00:01:56.129 CC lib/util/crc32.o 00:01:56.129 CC lib/util/crc32c.o 00:01:56.129 CC lib/util/crc32_ieee.o 00:01:56.129 CC lib/util/crc64.o 00:01:56.129 CC lib/util/dif.o 00:01:56.129 CC lib/util/fd.o 00:01:56.129 CC lib/util/file.o 00:01:56.129 CC lib/util/hexlify.o 00:01:56.129 CC lib/util/iov.o 00:01:56.129 CC lib/util/math.o 00:01:56.129 CC lib/util/pipe.o 00:01:56.129 CC lib/util/strerror_tls.o 00:01:56.129 CC lib/util/string.o 00:01:56.129 CC lib/util/uuid.o 00:01:56.129 CC lib/util/fd_group.o 00:01:56.129 CC lib/util/xor.o 00:01:56.129 CC lib/util/zipf.o 00:01:56.387 CC lib/vfio_user/host/vfio_user_pci.o 00:01:56.387 CC lib/vfio_user/host/vfio_user.o 00:01:56.387 LIB libspdk_dma.a 00:01:56.387 SO libspdk_dma.so.4.0 00:01:56.645 SYMLINK libspdk_dma.so 00:01:56.645 LIB libspdk_ioat.a 00:01:56.645 SO libspdk_ioat.so.7.0 00:01:56.645 SYMLINK libspdk_ioat.so 00:01:56.645 LIB libspdk_vfio_user.a 00:01:56.645 SO libspdk_vfio_user.so.5.0 00:01:56.645 SYMLINK libspdk_vfio_user.so 00:01:56.901 LIB libspdk_util.a 00:01:57.158 SO libspdk_util.so.9.1 00:01:57.158 SYMLINK libspdk_util.so 00:01:57.417 CC lib/rdma_utils/rdma_utils.o 00:01:57.417 CC lib/rdma_provider/common.o 00:01:57.417 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:57.417 CC lib/vmd/vmd.o 00:01:57.417 CC lib/conf/conf.o 00:01:57.417 CC lib/vmd/led.o 00:01:57.417 CC lib/env_dpdk/env.o 00:01:57.417 CC lib/env_dpdk/memory.o 00:01:57.417 CC lib/env_dpdk/pci.o 00:01:57.417 CC lib/json/json_parse.o 00:01:57.417 CC lib/env_dpdk/init.o 00:01:57.417 CC lib/idxd/idxd.o 00:01:57.417 CC lib/env_dpdk/threads.o 00:01:57.417 CC lib/json/json_util.o 00:01:57.417 CC lib/idxd/idxd_user.o 00:01:57.417 CC lib/env_dpdk/pci_ioat.o 00:01:57.417 CC lib/json/json_write.o 00:01:57.417 CC lib/idxd/idxd_kernel.o 00:01:57.417 CC lib/env_dpdk/pci_virtio.o 00:01:57.417 CC lib/env_dpdk/pci_vmd.o 00:01:57.417 CC lib/env_dpdk/pci_idxd.o 00:01:57.417 CC lib/env_dpdk/sigbus_handler.o 00:01:57.417 CC lib/env_dpdk/pci_event.o 00:01:57.417 CC lib/env_dpdk/pci_dpdk.o 00:01:57.417 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:57.417 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:57.417 LIB libspdk_trace_parser.a 00:01:57.417 SO libspdk_trace_parser.so.5.0 00:01:57.675 SYMLINK libspdk_trace_parser.so 00:01:57.675 LIB libspdk_conf.a 00:01:57.675 SO libspdk_conf.so.6.0 00:01:57.675 LIB libspdk_rdma_utils.a 00:01:57.675 LIB libspdk_rdma_provider.a 00:01:57.675 SO libspdk_rdma_utils.so.1.0 00:01:57.675 SO libspdk_rdma_provider.so.6.0 00:01:57.675 SYMLINK libspdk_conf.so 
00:01:57.675 LIB libspdk_json.a 00:01:57.675 SO libspdk_json.so.6.0 00:01:57.675 SYMLINK libspdk_rdma_utils.so 00:01:57.934 SYMLINK libspdk_rdma_provider.so 00:01:57.934 SYMLINK libspdk_json.so 00:01:57.934 CC lib/jsonrpc/jsonrpc_server.o 00:01:57.934 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:57.934 CC lib/jsonrpc/jsonrpc_client.o 00:01:57.934 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:58.193 LIB libspdk_idxd.a 00:01:58.193 SO libspdk_idxd.so.12.0 00:01:58.193 LIB libspdk_vmd.a 00:01:58.193 SYMLINK libspdk_idxd.so 00:01:58.194 SO libspdk_vmd.so.6.0 00:01:58.194 LIB libspdk_jsonrpc.a 00:01:58.451 SO libspdk_jsonrpc.so.6.0 00:01:58.451 SYMLINK libspdk_vmd.so 00:01:58.451 SYMLINK libspdk_jsonrpc.so 00:01:58.710 CC lib/rpc/rpc.o 00:01:58.710 LIB libspdk_rpc.a 00:01:58.969 SO libspdk_rpc.so.6.0 00:01:58.969 SYMLINK libspdk_rpc.so 00:01:58.969 CC lib/notify/notify.o 00:01:58.969 CC lib/notify/notify_rpc.o 00:01:58.969 CC lib/trace/trace.o 00:01:58.969 CC lib/keyring/keyring.o 00:01:58.969 CC lib/trace/trace_flags.o 00:01:58.969 CC lib/keyring/keyring_rpc.o 00:01:58.969 CC lib/trace/trace_rpc.o 00:01:59.226 LIB libspdk_notify.a 00:01:59.226 SO libspdk_notify.so.6.0 00:01:59.226 SYMLINK libspdk_notify.so 00:01:59.226 LIB libspdk_keyring.a 00:01:59.226 SO libspdk_keyring.so.1.0 00:01:59.226 LIB libspdk_trace.a 00:01:59.484 SO libspdk_trace.so.10.0 00:01:59.484 SYMLINK libspdk_keyring.so 00:01:59.484 SYMLINK libspdk_trace.so 00:01:59.484 CC lib/sock/sock.o 00:01:59.484 CC lib/thread/thread.o 00:01:59.484 CC lib/sock/sock_rpc.o 00:01:59.484 CC lib/thread/iobuf.o 00:02:00.051 LIB libspdk_sock.a 00:02:00.051 SO libspdk_sock.so.10.0 00:02:00.051 SYMLINK libspdk_sock.so 00:02:00.328 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:00.328 CC lib/nvme/nvme_ctrlr.o 00:02:00.328 CC lib/nvme/nvme_fabric.o 00:02:00.328 CC lib/nvme/nvme_ns_cmd.o 00:02:00.328 CC lib/nvme/nvme_ns.o 00:02:00.328 CC lib/nvme/nvme_pcie_common.o 00:02:00.328 CC lib/nvme/nvme_pcie.o 00:02:00.328 CC lib/nvme/nvme_qpair.o 00:02:00.328 CC lib/nvme/nvme.o 00:02:00.328 CC lib/nvme/nvme_quirks.o 00:02:00.328 CC lib/nvme/nvme_transport.o 00:02:00.328 CC lib/nvme/nvme_discovery.o 00:02:00.328 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:00.328 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:00.328 CC lib/nvme/nvme_opal.o 00:02:00.328 CC lib/nvme/nvme_tcp.o 00:02:00.328 CC lib/nvme/nvme_io_msg.o 00:02:00.328 CC lib/nvme/nvme_poll_group.o 00:02:00.328 CC lib/nvme/nvme_zns.o 00:02:00.328 CC lib/nvme/nvme_stubs.o 00:02:00.328 CC lib/nvme/nvme_auth.o 00:02:00.328 CC lib/nvme/nvme_cuse.o 00:02:00.328 CC lib/nvme/nvme_rdma.o 00:02:00.328 LIB libspdk_env_dpdk.a 00:02:00.597 SO libspdk_env_dpdk.so.14.1 00:02:00.855 SYMLINK libspdk_env_dpdk.so 00:02:01.787 LIB libspdk_thread.a 00:02:01.787 SO libspdk_thread.so.10.1 00:02:01.787 SYMLINK libspdk_thread.so 00:02:02.046 CC lib/accel/accel.o 00:02:02.046 CC lib/virtio/virtio.o 00:02:02.046 CC lib/blob/blobstore.o 00:02:02.046 CC lib/accel/accel_rpc.o 00:02:02.046 CC lib/virtio/virtio_vhost_user.o 00:02:02.046 CC lib/blob/request.o 00:02:02.046 CC lib/accel/accel_sw.o 00:02:02.046 CC lib/init/json_config.o 00:02:02.046 CC lib/blob/zeroes.o 00:02:02.046 CC lib/virtio/virtio_vfio_user.o 00:02:02.046 CC lib/init/subsystem.o 00:02:02.046 CC lib/virtio/virtio_pci.o 00:02:02.046 CC lib/blob/blob_bs_dev.o 00:02:02.046 CC lib/init/subsystem_rpc.o 00:02:02.046 CC lib/init/rpc.o 00:02:02.304 LIB libspdk_init.a 00:02:02.304 SO libspdk_init.so.5.0 00:02:02.304 SYMLINK libspdk_init.so 00:02:02.304 LIB libspdk_virtio.a 00:02:02.304 SO 
libspdk_virtio.so.7.0 00:02:02.561 SYMLINK libspdk_virtio.so 00:02:02.561 CC lib/event/app.o 00:02:02.561 CC lib/event/reactor.o 00:02:02.561 CC lib/event/log_rpc.o 00:02:02.561 CC lib/event/app_rpc.o 00:02:02.561 CC lib/event/scheduler_static.o 00:02:03.127 LIB libspdk_event.a 00:02:03.127 SO libspdk_event.so.14.0 00:02:03.127 SYMLINK libspdk_event.so 00:02:03.127 LIB libspdk_accel.a 00:02:03.385 LIB libspdk_nvme.a 00:02:03.385 SO libspdk_accel.so.15.1 00:02:03.385 SYMLINK libspdk_accel.so 00:02:03.385 SO libspdk_nvme.so.13.1 00:02:03.642 CC lib/bdev/bdev.o 00:02:03.642 CC lib/bdev/bdev_rpc.o 00:02:03.642 CC lib/bdev/bdev_zone.o 00:02:03.642 CC lib/bdev/part.o 00:02:03.642 CC lib/bdev/scsi_nvme.o 00:02:03.642 SYMLINK libspdk_nvme.so 00:02:06.167 LIB libspdk_blob.a 00:02:06.167 SO libspdk_blob.so.11.0 00:02:06.167 SYMLINK libspdk_blob.so 00:02:06.167 CC lib/lvol/lvol.o 00:02:06.167 CC lib/blobfs/blobfs.o 00:02:06.167 CC lib/blobfs/tree.o 00:02:06.730 LIB libspdk_bdev.a 00:02:06.730 SO libspdk_bdev.so.15.1 00:02:06.988 SYMLINK libspdk_bdev.so 00:02:06.988 CC lib/ublk/ublk.o 00:02:06.988 CC lib/nvmf/ctrlr.o 00:02:06.988 CC lib/ublk/ublk_rpc.o 00:02:06.988 CC lib/nvmf/ctrlr_discovery.o 00:02:06.988 CC lib/scsi/dev.o 00:02:06.988 CC lib/nvmf/ctrlr_bdev.o 00:02:06.988 CC lib/scsi/lun.o 00:02:06.988 CC lib/nvmf/subsystem.o 00:02:06.988 CC lib/scsi/port.o 00:02:06.988 CC lib/nvmf/nvmf.o 00:02:06.988 CC lib/nbd/nbd.o 00:02:06.988 CC lib/scsi/scsi.o 00:02:06.988 CC lib/nvmf/nvmf_rpc.o 00:02:06.988 CC lib/scsi/scsi_bdev.o 00:02:06.988 CC lib/nbd/nbd_rpc.o 00:02:06.988 CC lib/ftl/ftl_core.o 00:02:06.988 CC lib/nvmf/transport.o 00:02:06.988 CC lib/scsi/scsi_pr.o 00:02:06.988 CC lib/scsi/scsi_rpc.o 00:02:06.988 CC lib/nvmf/tcp.o 00:02:06.988 CC lib/ftl/ftl_init.o 00:02:06.988 CC lib/ftl/ftl_layout.o 00:02:06.988 CC lib/scsi/task.o 00:02:06.988 CC lib/nvmf/stubs.o 00:02:06.988 CC lib/nvmf/mdns_server.o 00:02:06.988 CC lib/ftl/ftl_debug.o 00:02:06.988 CC lib/ftl/ftl_io.o 00:02:06.988 CC lib/nvmf/rdma.o 00:02:06.988 CC lib/ftl/ftl_sb.o 00:02:06.988 CC lib/ftl/ftl_l2p.o 00:02:06.988 CC lib/nvmf/auth.o 00:02:06.988 CC lib/ftl/ftl_l2p_flat.o 00:02:06.988 CC lib/ftl/ftl_nv_cache.o 00:02:06.988 CC lib/ftl/ftl_band.o 00:02:06.988 CC lib/ftl/ftl_band_ops.o 00:02:06.988 CC lib/ftl/ftl_writer.o 00:02:06.988 CC lib/ftl/ftl_rq.o 00:02:06.988 CC lib/ftl/ftl_reloc.o 00:02:06.988 CC lib/ftl/ftl_l2p_cache.o 00:02:06.988 CC lib/ftl/ftl_p2l.o 00:02:06.988 CC lib/ftl/mngt/ftl_mngt.o 00:02:06.988 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:06.988 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:06.988 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:06.988 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:06.988 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:07.560 LIB libspdk_blobfs.a 00:02:07.560 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:07.560 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:07.560 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:07.560 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:07.560 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:07.560 SO libspdk_blobfs.so.10.0 00:02:07.560 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:07.560 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:07.560 CC lib/ftl/utils/ftl_conf.o 00:02:07.560 CC lib/ftl/utils/ftl_md.o 00:02:07.560 CC lib/ftl/utils/ftl_mempool.o 00:02:07.560 CC lib/ftl/utils/ftl_bitmap.o 00:02:07.560 CC lib/ftl/utils/ftl_property.o 00:02:07.560 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:07.560 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:07.560 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:07.560 SYMLINK libspdk_blobfs.so 00:02:07.560 CC 
lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:07.560 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:07.819 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:07.819 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:07.819 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:07.819 LIB libspdk_lvol.a 00:02:07.819 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:07.819 SO libspdk_lvol.so.10.0 00:02:07.819 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:07.819 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:07.819 CC lib/ftl/base/ftl_base_dev.o 00:02:07.819 CC lib/ftl/base/ftl_base_bdev.o 00:02:07.819 CC lib/ftl/ftl_trace.o 00:02:07.819 SYMLINK libspdk_lvol.so 00:02:08.077 LIB libspdk_nbd.a 00:02:08.077 SO libspdk_nbd.so.7.0 00:02:08.077 SYMLINK libspdk_nbd.so 00:02:08.335 LIB libspdk_scsi.a 00:02:08.335 SO libspdk_scsi.so.9.0 00:02:08.335 LIB libspdk_ublk.a 00:02:08.335 SYMLINK libspdk_scsi.so 00:02:08.335 SO libspdk_ublk.so.3.0 00:02:08.594 SYMLINK libspdk_ublk.so 00:02:08.594 CC lib/iscsi/conn.o 00:02:08.594 CC lib/iscsi/init_grp.o 00:02:08.594 CC lib/vhost/vhost.o 00:02:08.594 CC lib/iscsi/iscsi.o 00:02:08.594 CC lib/vhost/vhost_rpc.o 00:02:08.594 CC lib/iscsi/md5.o 00:02:08.594 CC lib/vhost/vhost_scsi.o 00:02:08.594 CC lib/iscsi/param.o 00:02:08.594 CC lib/vhost/vhost_blk.o 00:02:08.594 CC lib/iscsi/portal_grp.o 00:02:08.594 CC lib/vhost/rte_vhost_user.o 00:02:08.594 CC lib/iscsi/tgt_node.o 00:02:08.594 CC lib/iscsi/iscsi_subsystem.o 00:02:08.594 CC lib/iscsi/iscsi_rpc.o 00:02:08.594 CC lib/iscsi/task.o 00:02:08.852 LIB libspdk_ftl.a 00:02:09.111 SO libspdk_ftl.so.9.0 00:02:09.678 SYMLINK libspdk_ftl.so 00:02:09.936 LIB libspdk_vhost.a 00:02:09.936 SO libspdk_vhost.so.8.0 00:02:10.194 SYMLINK libspdk_vhost.so 00:02:10.452 LIB libspdk_iscsi.a 00:02:10.452 SO libspdk_iscsi.so.8.0 00:02:10.452 LIB libspdk_nvmf.a 00:02:10.711 SO libspdk_nvmf.so.18.1 00:02:10.711 SYMLINK libspdk_iscsi.so 00:02:10.711 SYMLINK libspdk_nvmf.so 00:02:10.969 CC module/env_dpdk/env_dpdk_rpc.o 00:02:11.226 CC module/sock/posix/posix.o 00:02:11.226 CC module/scheduler/gscheduler/gscheduler.o 00:02:11.226 CC module/keyring/file/keyring.o 00:02:11.226 CC module/accel/dsa/accel_dsa.o 00:02:11.226 CC module/accel/iaa/accel_iaa.o 00:02:11.227 CC module/accel/dsa/accel_dsa_rpc.o 00:02:11.227 CC module/keyring/file/keyring_rpc.o 00:02:11.227 CC module/accel/error/accel_error.o 00:02:11.227 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:11.227 CC module/accel/iaa/accel_iaa_rpc.o 00:02:11.227 CC module/blob/bdev/blob_bdev.o 00:02:11.227 CC module/accel/error/accel_error_rpc.o 00:02:11.227 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:11.227 CC module/keyring/linux/keyring.o 00:02:11.227 CC module/accel/ioat/accel_ioat.o 00:02:11.227 CC module/keyring/linux/keyring_rpc.o 00:02:11.227 CC module/accel/ioat/accel_ioat_rpc.o 00:02:11.227 LIB libspdk_env_dpdk_rpc.a 00:02:11.227 SO libspdk_env_dpdk_rpc.so.6.0 00:02:11.227 SYMLINK libspdk_env_dpdk_rpc.so 00:02:11.485 LIB libspdk_keyring_linux.a 00:02:11.485 LIB libspdk_keyring_file.a 00:02:11.485 LIB libspdk_scheduler_gscheduler.a 00:02:11.485 LIB libspdk_scheduler_dpdk_governor.a 00:02:11.485 SO libspdk_keyring_file.so.1.0 00:02:11.485 SO libspdk_keyring_linux.so.1.0 00:02:11.485 SO libspdk_scheduler_gscheduler.so.4.0 00:02:11.485 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:11.485 LIB libspdk_accel_error.a 00:02:11.485 LIB libspdk_accel_ioat.a 00:02:11.485 LIB libspdk_scheduler_dynamic.a 00:02:11.485 SO libspdk_accel_error.so.2.0 00:02:11.485 LIB libspdk_accel_iaa.a 00:02:11.485 SO libspdk_accel_ioat.so.6.0 00:02:11.485 SYMLINK 
libspdk_scheduler_gscheduler.so 00:02:11.485 SYMLINK libspdk_keyring_linux.so 00:02:11.485 SYMLINK libspdk_keyring_file.so 00:02:11.485 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:11.485 SO libspdk_scheduler_dynamic.so.4.0 00:02:11.485 SO libspdk_accel_iaa.so.3.0 00:02:11.485 SYMLINK libspdk_accel_error.so 00:02:11.485 SYMLINK libspdk_accel_ioat.so 00:02:11.485 SYMLINK libspdk_scheduler_dynamic.so 00:02:11.485 LIB libspdk_blob_bdev.a 00:02:11.485 LIB libspdk_accel_dsa.a 00:02:11.485 SYMLINK libspdk_accel_iaa.so 00:02:11.485 SO libspdk_blob_bdev.so.11.0 00:02:11.485 SO libspdk_accel_dsa.so.5.0 00:02:11.485 SYMLINK libspdk_blob_bdev.so 00:02:11.485 SYMLINK libspdk_accel_dsa.so 00:02:11.748 CC module/bdev/delay/vbdev_delay.o 00:02:11.748 CC module/bdev/malloc/bdev_malloc.o 00:02:11.748 CC module/bdev/lvol/vbdev_lvol.o 00:02:11.748 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:11.748 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:11.748 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:11.748 CC module/bdev/gpt/gpt.o 00:02:11.748 CC module/bdev/aio/bdev_aio.o 00:02:11.748 CC module/bdev/null/bdev_null.o 00:02:11.748 CC module/bdev/gpt/vbdev_gpt.o 00:02:11.748 CC module/blobfs/bdev/blobfs_bdev.o 00:02:11.748 CC module/bdev/aio/bdev_aio_rpc.o 00:02:11.748 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:11.748 CC module/bdev/nvme/bdev_nvme.o 00:02:11.748 CC module/bdev/null/bdev_null_rpc.o 00:02:11.748 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:11.748 CC module/bdev/error/vbdev_error.o 00:02:11.748 CC module/bdev/error/vbdev_error_rpc.o 00:02:11.748 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:11.748 CC module/bdev/split/vbdev_split.o 00:02:11.748 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:11.748 CC module/bdev/raid/bdev_raid.o 00:02:11.748 CC module/bdev/nvme/nvme_rpc.o 00:02:11.748 CC module/bdev/ftl/bdev_ftl.o 00:02:11.748 CC module/bdev/nvme/bdev_mdns_client.o 00:02:11.748 CC module/bdev/raid/bdev_raid_rpc.o 00:02:11.748 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:11.748 CC module/bdev/split/vbdev_split_rpc.o 00:02:11.748 CC module/bdev/nvme/vbdev_opal.o 00:02:11.748 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:11.748 CC module/bdev/raid/bdev_raid_sb.o 00:02:11.748 CC module/bdev/passthru/vbdev_passthru.o 00:02:11.748 CC module/bdev/iscsi/bdev_iscsi.o 00:02:11.748 CC module/bdev/raid/raid0.o 00:02:11.748 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:11.748 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:11.748 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:11.748 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:11.748 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:11.748 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:11.748 CC module/bdev/raid/raid1.o 00:02:11.748 CC module/bdev/raid/concat.o 00:02:12.316 LIB libspdk_blobfs_bdev.a 00:02:12.316 SO libspdk_blobfs_bdev.so.6.0 00:02:12.316 LIB libspdk_bdev_split.a 00:02:12.316 SYMLINK libspdk_blobfs_bdev.so 00:02:12.316 LIB libspdk_bdev_gpt.a 00:02:12.316 SO libspdk_bdev_split.so.6.0 00:02:12.316 LIB libspdk_sock_posix.a 00:02:12.316 SO libspdk_bdev_gpt.so.6.0 00:02:12.316 LIB libspdk_bdev_ftl.a 00:02:12.316 SO libspdk_sock_posix.so.6.0 00:02:12.316 LIB libspdk_bdev_passthru.a 00:02:12.316 LIB libspdk_bdev_null.a 00:02:12.316 SO libspdk_bdev_ftl.so.6.0 00:02:12.316 SYMLINK libspdk_bdev_split.so 00:02:12.316 LIB libspdk_bdev_error.a 00:02:12.316 SO libspdk_bdev_passthru.so.6.0 00:02:12.316 LIB libspdk_bdev_delay.a 00:02:12.316 SYMLINK libspdk_bdev_gpt.so 00:02:12.316 SO libspdk_bdev_null.so.6.0 00:02:12.316 SO libspdk_bdev_error.so.6.0 
00:02:12.574 SO libspdk_bdev_delay.so.6.0 00:02:12.574 SYMLINK libspdk_sock_posix.so 00:02:12.574 LIB libspdk_bdev_zone_block.a 00:02:12.574 SYMLINK libspdk_bdev_ftl.so 00:02:12.574 SYMLINK libspdk_bdev_passthru.so 00:02:12.574 LIB libspdk_bdev_aio.a 00:02:12.574 SO libspdk_bdev_zone_block.so.6.0 00:02:12.574 LIB libspdk_bdev_iscsi.a 00:02:12.574 SYMLINK libspdk_bdev_null.so 00:02:12.574 SYMLINK libspdk_bdev_error.so 00:02:12.574 SYMLINK libspdk_bdev_delay.so 00:02:12.574 SO libspdk_bdev_aio.so.6.0 00:02:12.574 SO libspdk_bdev_iscsi.so.6.0 00:02:12.574 LIB libspdk_bdev_malloc.a 00:02:12.574 SYMLINK libspdk_bdev_zone_block.so 00:02:12.574 SO libspdk_bdev_malloc.so.6.0 00:02:12.574 SYMLINK libspdk_bdev_aio.so 00:02:12.574 SYMLINK libspdk_bdev_iscsi.so 00:02:12.574 SYMLINK libspdk_bdev_malloc.so 00:02:12.574 LIB libspdk_bdev_lvol.a 00:02:12.833 SO libspdk_bdev_lvol.so.6.0 00:02:12.833 SYMLINK libspdk_bdev_lvol.so 00:02:12.833 LIB libspdk_bdev_virtio.a 00:02:12.833 SO libspdk_bdev_virtio.so.6.0 00:02:12.833 SYMLINK libspdk_bdev_virtio.so 00:02:13.400 LIB libspdk_bdev_raid.a 00:02:13.400 SO libspdk_bdev_raid.so.6.0 00:02:13.400 SYMLINK libspdk_bdev_raid.so 00:02:14.812 LIB libspdk_bdev_nvme.a 00:02:14.812 SO libspdk_bdev_nvme.so.7.0 00:02:15.071 SYMLINK libspdk_bdev_nvme.so 00:02:15.330 CC module/event/subsystems/iobuf/iobuf.o 00:02:15.330 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:15.330 CC module/event/subsystems/keyring/keyring.o 00:02:15.330 CC module/event/subsystems/scheduler/scheduler.o 00:02:15.330 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:15.330 CC module/event/subsystems/vmd/vmd.o 00:02:15.330 CC module/event/subsystems/sock/sock.o 00:02:15.330 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:15.588 LIB libspdk_event_keyring.a 00:02:15.588 LIB libspdk_event_vhost_blk.a 00:02:15.588 LIB libspdk_event_scheduler.a 00:02:15.588 LIB libspdk_event_vmd.a 00:02:15.588 LIB libspdk_event_sock.a 00:02:15.588 SO libspdk_event_keyring.so.1.0 00:02:15.588 LIB libspdk_event_iobuf.a 00:02:15.588 SO libspdk_event_vhost_blk.so.3.0 00:02:15.588 SO libspdk_event_scheduler.so.4.0 00:02:15.588 SO libspdk_event_sock.so.5.0 00:02:15.588 SO libspdk_event_vmd.so.6.0 00:02:15.588 SO libspdk_event_iobuf.so.3.0 00:02:15.588 SYMLINK libspdk_event_keyring.so 00:02:15.588 SYMLINK libspdk_event_vhost_blk.so 00:02:15.588 SYMLINK libspdk_event_scheduler.so 00:02:15.588 SYMLINK libspdk_event_sock.so 00:02:15.588 SYMLINK libspdk_event_vmd.so 00:02:15.588 SYMLINK libspdk_event_iobuf.so 00:02:15.845 CC module/event/subsystems/accel/accel.o 00:02:16.103 LIB libspdk_event_accel.a 00:02:16.103 SO libspdk_event_accel.so.6.0 00:02:16.103 SYMLINK libspdk_event_accel.so 00:02:16.360 CC module/event/subsystems/bdev/bdev.o 00:02:16.360 LIB libspdk_event_bdev.a 00:02:16.360 SO libspdk_event_bdev.so.6.0 00:02:16.617 SYMLINK libspdk_event_bdev.so 00:02:16.617 CC module/event/subsystems/scsi/scsi.o 00:02:16.617 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:16.617 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:16.617 CC module/event/subsystems/nbd/nbd.o 00:02:16.617 CC module/event/subsystems/ublk/ublk.o 00:02:16.874 LIB libspdk_event_nbd.a 00:02:16.874 LIB libspdk_event_ublk.a 00:02:16.874 LIB libspdk_event_scsi.a 00:02:16.874 SO libspdk_event_nbd.so.6.0 00:02:16.874 SO libspdk_event_ublk.so.3.0 00:02:16.874 SO libspdk_event_scsi.so.6.0 00:02:16.874 SYMLINK libspdk_event_nbd.so 00:02:16.874 SYMLINK libspdk_event_ublk.so 00:02:16.874 SYMLINK libspdk_event_scsi.so 00:02:16.874 LIB libspdk_event_nvmf.a 00:02:16.874 
SO libspdk_event_nvmf.so.6.0 00:02:17.132 SYMLINK libspdk_event_nvmf.so 00:02:17.132 CC module/event/subsystems/iscsi/iscsi.o 00:02:17.132 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:17.132 LIB libspdk_event_vhost_scsi.a 00:02:17.132 LIB libspdk_event_iscsi.a 00:02:17.132 SO libspdk_event_vhost_scsi.so.3.0 00:02:17.391 SO libspdk_event_iscsi.so.6.0 00:02:17.391 SYMLINK libspdk_event_vhost_scsi.so 00:02:17.391 SYMLINK libspdk_event_iscsi.so 00:02:17.391 SO libspdk.so.6.0 00:02:17.391 SYMLINK libspdk.so 00:02:17.653 CXX app/trace/trace.o 00:02:17.653 CC app/spdk_top/spdk_top.o 00:02:17.653 CC app/spdk_nvme_identify/identify.o 00:02:17.653 TEST_HEADER include/spdk/accel.h 00:02:17.653 TEST_HEADER include/spdk/accel_module.h 00:02:17.653 CC test/rpc_client/rpc_client_test.o 00:02:17.653 TEST_HEADER include/spdk/barrier.h 00:02:17.653 TEST_HEADER include/spdk/assert.h 00:02:17.653 CC app/spdk_nvme_perf/perf.o 00:02:17.653 CC app/trace_record/trace_record.o 00:02:17.653 TEST_HEADER include/spdk/base64.h 00:02:17.653 TEST_HEADER include/spdk/bdev.h 00:02:17.653 TEST_HEADER include/spdk/bdev_module.h 00:02:17.653 TEST_HEADER include/spdk/bdev_zone.h 00:02:17.653 TEST_HEADER include/spdk/bit_array.h 00:02:17.653 CC app/spdk_lspci/spdk_lspci.o 00:02:17.653 CC app/spdk_nvme_discover/discovery_aer.o 00:02:17.653 TEST_HEADER include/spdk/bit_pool.h 00:02:17.653 TEST_HEADER include/spdk/blob_bdev.h 00:02:17.653 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:17.653 TEST_HEADER include/spdk/blobfs.h 00:02:17.653 TEST_HEADER include/spdk/blob.h 00:02:17.653 TEST_HEADER include/spdk/conf.h 00:02:17.653 TEST_HEADER include/spdk/config.h 00:02:17.653 TEST_HEADER include/spdk/cpuset.h 00:02:17.653 TEST_HEADER include/spdk/crc16.h 00:02:17.653 TEST_HEADER include/spdk/crc32.h 00:02:17.653 TEST_HEADER include/spdk/crc64.h 00:02:17.653 TEST_HEADER include/spdk/dif.h 00:02:17.653 TEST_HEADER include/spdk/dma.h 00:02:17.653 TEST_HEADER include/spdk/endian.h 00:02:17.653 TEST_HEADER include/spdk/env_dpdk.h 00:02:17.653 TEST_HEADER include/spdk/env.h 00:02:17.653 TEST_HEADER include/spdk/event.h 00:02:17.653 TEST_HEADER include/spdk/fd_group.h 00:02:17.653 TEST_HEADER include/spdk/fd.h 00:02:17.653 TEST_HEADER include/spdk/ftl.h 00:02:17.653 TEST_HEADER include/spdk/file.h 00:02:17.653 TEST_HEADER include/spdk/gpt_spec.h 00:02:17.653 TEST_HEADER include/spdk/hexlify.h 00:02:17.653 TEST_HEADER include/spdk/histogram_data.h 00:02:17.653 TEST_HEADER include/spdk/idxd.h 00:02:17.653 TEST_HEADER include/spdk/idxd_spec.h 00:02:17.653 TEST_HEADER include/spdk/init.h 00:02:17.653 TEST_HEADER include/spdk/ioat.h 00:02:17.653 TEST_HEADER include/spdk/ioat_spec.h 00:02:17.653 TEST_HEADER include/spdk/iscsi_spec.h 00:02:17.653 TEST_HEADER include/spdk/json.h 00:02:17.653 TEST_HEADER include/spdk/jsonrpc.h 00:02:17.653 TEST_HEADER include/spdk/keyring.h 00:02:17.653 TEST_HEADER include/spdk/keyring_module.h 00:02:17.653 TEST_HEADER include/spdk/likely.h 00:02:17.653 TEST_HEADER include/spdk/log.h 00:02:17.653 TEST_HEADER include/spdk/lvol.h 00:02:17.653 TEST_HEADER include/spdk/mmio.h 00:02:17.653 TEST_HEADER include/spdk/memory.h 00:02:17.653 TEST_HEADER include/spdk/nbd.h 00:02:17.653 TEST_HEADER include/spdk/nvme.h 00:02:17.653 TEST_HEADER include/spdk/notify.h 00:02:17.653 TEST_HEADER include/spdk/nvme_intel.h 00:02:17.653 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:17.653 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:17.653 TEST_HEADER include/spdk/nvme_spec.h 00:02:17.653 TEST_HEADER include/spdk/nvme_zns.h 
00:02:17.653 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:17.653 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:17.653 TEST_HEADER include/spdk/nvmf.h 00:02:17.653 TEST_HEADER include/spdk/nvmf_spec.h 00:02:17.653 TEST_HEADER include/spdk/nvmf_transport.h 00:02:17.653 TEST_HEADER include/spdk/opal.h 00:02:17.653 TEST_HEADER include/spdk/opal_spec.h 00:02:17.653 TEST_HEADER include/spdk/pci_ids.h 00:02:17.653 TEST_HEADER include/spdk/pipe.h 00:02:17.653 TEST_HEADER include/spdk/queue.h 00:02:17.653 TEST_HEADER include/spdk/reduce.h 00:02:17.653 TEST_HEADER include/spdk/rpc.h 00:02:17.653 TEST_HEADER include/spdk/scheduler.h 00:02:17.653 TEST_HEADER include/spdk/scsi.h 00:02:17.653 TEST_HEADER include/spdk/scsi_spec.h 00:02:17.653 TEST_HEADER include/spdk/sock.h 00:02:17.653 TEST_HEADER include/spdk/stdinc.h 00:02:17.653 TEST_HEADER include/spdk/string.h 00:02:17.653 TEST_HEADER include/spdk/thread.h 00:02:17.653 TEST_HEADER include/spdk/trace.h 00:02:17.653 TEST_HEADER include/spdk/trace_parser.h 00:02:17.653 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:17.653 TEST_HEADER include/spdk/tree.h 00:02:17.653 TEST_HEADER include/spdk/ublk.h 00:02:17.653 TEST_HEADER include/spdk/util.h 00:02:17.653 TEST_HEADER include/spdk/uuid.h 00:02:17.653 TEST_HEADER include/spdk/version.h 00:02:17.653 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:17.653 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:17.653 TEST_HEADER include/spdk/vhost.h 00:02:17.653 CC app/spdk_dd/spdk_dd.o 00:02:17.653 TEST_HEADER include/spdk/vmd.h 00:02:17.653 TEST_HEADER include/spdk/xor.h 00:02:17.653 TEST_HEADER include/spdk/zipf.h 00:02:17.653 CXX test/cpp_headers/accel.o 00:02:17.653 CXX test/cpp_headers/accel_module.o 00:02:17.653 CXX test/cpp_headers/assert.o 00:02:17.653 CXX test/cpp_headers/barrier.o 00:02:17.653 CXX test/cpp_headers/base64.o 00:02:17.653 CXX test/cpp_headers/bdev.o 00:02:17.653 CXX test/cpp_headers/bdev_module.o 00:02:17.653 CXX test/cpp_headers/bdev_zone.o 00:02:17.653 CXX test/cpp_headers/bit_array.o 00:02:17.653 CXX test/cpp_headers/bit_pool.o 00:02:17.653 CXX test/cpp_headers/blob_bdev.o 00:02:17.653 CXX test/cpp_headers/blobfs_bdev.o 00:02:17.653 CXX test/cpp_headers/blobfs.o 00:02:17.653 CXX test/cpp_headers/blob.o 00:02:17.653 CXX test/cpp_headers/conf.o 00:02:17.653 CXX test/cpp_headers/config.o 00:02:17.653 CC app/iscsi_tgt/iscsi_tgt.o 00:02:17.653 CXX test/cpp_headers/cpuset.o 00:02:17.653 CXX test/cpp_headers/crc16.o 00:02:17.653 CC app/nvmf_tgt/nvmf_main.o 00:02:17.653 CC examples/ioat/verify/verify.o 00:02:17.653 CC app/spdk_tgt/spdk_tgt.o 00:02:17.653 CC examples/ioat/perf/perf.o 00:02:17.653 CXX test/cpp_headers/crc32.o 00:02:17.653 CC test/app/histogram_perf/histogram_perf.o 00:02:17.653 CC test/app/jsoncat/jsoncat.o 00:02:17.653 CC test/thread/poller_perf/poller_perf.o 00:02:17.653 CC test/env/memory/memory_ut.o 00:02:17.653 CC app/fio/nvme/fio_plugin.o 00:02:17.653 CC examples/util/zipf/zipf.o 00:02:17.653 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:17.653 CC test/env/vtophys/vtophys.o 00:02:17.653 CC test/app/stub/stub.o 00:02:17.653 CC test/env/pci/pci_ut.o 00:02:17.913 CC test/app/bdev_svc/bdev_svc.o 00:02:17.913 CC test/dma/test_dma/test_dma.o 00:02:17.913 CC app/fio/bdev/fio_plugin.o 00:02:17.913 LINK spdk_lspci 00:02:17.913 CC test/env/mem_callbacks/mem_callbacks.o 00:02:17.913 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:17.913 LINK rpc_client_test 00:02:18.179 LINK histogram_perf 00:02:18.179 LINK jsoncat 00:02:18.179 LINK spdk_nvme_discover 00:02:18.179 CXX 
test/cpp_headers/crc64.o 00:02:18.179 CXX test/cpp_headers/dif.o 00:02:18.179 LINK interrupt_tgt 00:02:18.179 LINK poller_perf 00:02:18.179 LINK vtophys 00:02:18.179 LINK nvmf_tgt 00:02:18.179 CXX test/cpp_headers/dma.o 00:02:18.179 LINK env_dpdk_post_init 00:02:18.179 CXX test/cpp_headers/endian.o 00:02:18.179 CXX test/cpp_headers/env_dpdk.o 00:02:18.179 CXX test/cpp_headers/env.o 00:02:18.179 CXX test/cpp_headers/event.o 00:02:18.179 CXX test/cpp_headers/fd_group.o 00:02:18.179 LINK zipf 00:02:18.179 CXX test/cpp_headers/fd.o 00:02:18.179 CXX test/cpp_headers/file.o 00:02:18.179 CXX test/cpp_headers/ftl.o 00:02:18.179 CXX test/cpp_headers/gpt_spec.o 00:02:18.179 LINK iscsi_tgt 00:02:18.179 LINK stub 00:02:18.179 CXX test/cpp_headers/hexlify.o 00:02:18.179 LINK spdk_tgt 00:02:18.179 CXX test/cpp_headers/histogram_data.o 00:02:18.179 CXX test/cpp_headers/idxd.o 00:02:18.179 CXX test/cpp_headers/idxd_spec.o 00:02:18.179 LINK spdk_trace_record 00:02:18.179 LINK verify 00:02:18.179 CXX test/cpp_headers/init.o 00:02:18.179 LINK bdev_svc 00:02:18.179 LINK ioat_perf 00:02:18.179 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:18.179 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:18.439 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:18.439 CXX test/cpp_headers/ioat.o 00:02:18.439 CXX test/cpp_headers/ioat_spec.o 00:02:18.439 CXX test/cpp_headers/iscsi_spec.o 00:02:18.439 CXX test/cpp_headers/json.o 00:02:18.439 CXX test/cpp_headers/jsonrpc.o 00:02:18.439 CXX test/cpp_headers/keyring.o 00:02:18.439 LINK spdk_dd 00:02:18.439 CXX test/cpp_headers/keyring_module.o 00:02:18.439 CXX test/cpp_headers/likely.o 00:02:18.439 CXX test/cpp_headers/log.o 00:02:18.439 CXX test/cpp_headers/lvol.o 00:02:18.439 CXX test/cpp_headers/memory.o 00:02:18.705 CXX test/cpp_headers/mmio.o 00:02:18.705 CXX test/cpp_headers/nbd.o 00:02:18.705 CXX test/cpp_headers/notify.o 00:02:18.706 CXX test/cpp_headers/nvme.o 00:02:18.706 CXX test/cpp_headers/nvme_intel.o 00:02:18.706 CXX test/cpp_headers/nvme_ocssd.o 00:02:18.706 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:18.706 CXX test/cpp_headers/nvme_spec.o 00:02:18.706 CXX test/cpp_headers/nvme_zns.o 00:02:18.706 CXX test/cpp_headers/nvmf_cmd.o 00:02:18.706 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:18.706 CXX test/cpp_headers/nvmf.o 00:02:18.706 CXX test/cpp_headers/nvmf_spec.o 00:02:18.706 CXX test/cpp_headers/nvmf_transport.o 00:02:18.706 LINK spdk_trace 00:02:18.706 CXX test/cpp_headers/opal.o 00:02:18.706 CXX test/cpp_headers/opal_spec.o 00:02:18.706 CXX test/cpp_headers/pci_ids.o 00:02:18.706 LINK pci_ut 00:02:18.706 LINK test_dma 00:02:18.706 CC test/event/event_perf/event_perf.o 00:02:18.706 CXX test/cpp_headers/pipe.o 00:02:18.706 CXX test/cpp_headers/queue.o 00:02:18.965 CC examples/sock/hello_world/hello_sock.o 00:02:18.965 CXX test/cpp_headers/reduce.o 00:02:18.965 CC test/event/reactor/reactor.o 00:02:18.965 CC test/event/reactor_perf/reactor_perf.o 00:02:18.965 CC examples/idxd/perf/perf.o 00:02:18.965 CC examples/vmd/lsvmd/lsvmd.o 00:02:18.965 CXX test/cpp_headers/rpc.o 00:02:18.965 CXX test/cpp_headers/scheduler.o 00:02:18.965 CXX test/cpp_headers/scsi.o 00:02:18.965 CC examples/thread/thread/thread_ex.o 00:02:18.965 CXX test/cpp_headers/scsi_spec.o 00:02:18.965 CC test/event/app_repeat/app_repeat.o 00:02:18.965 CC examples/vmd/led/led.o 00:02:18.965 CXX test/cpp_headers/sock.o 00:02:18.965 CXX test/cpp_headers/stdinc.o 00:02:18.965 CXX test/cpp_headers/string.o 00:02:18.965 CXX test/cpp_headers/thread.o 00:02:18.965 CC test/event/scheduler/scheduler.o 
00:02:18.965 LINK nvme_fuzz 00:02:18.965 CXX test/cpp_headers/trace.o 00:02:18.965 CXX test/cpp_headers/trace_parser.o 00:02:18.965 LINK spdk_bdev 00:02:18.965 CXX test/cpp_headers/tree.o 00:02:18.965 CXX test/cpp_headers/ublk.o 00:02:18.965 CXX test/cpp_headers/util.o 00:02:18.965 CXX test/cpp_headers/uuid.o 00:02:18.965 CXX test/cpp_headers/version.o 00:02:18.965 CXX test/cpp_headers/vfio_user_pci.o 00:02:18.965 CXX test/cpp_headers/vfio_user_spec.o 00:02:19.227 CXX test/cpp_headers/vhost.o 00:02:19.227 CXX test/cpp_headers/vmd.o 00:02:19.227 CXX test/cpp_headers/xor.o 00:02:19.227 LINK mem_callbacks 00:02:19.227 CXX test/cpp_headers/zipf.o 00:02:19.227 LINK event_perf 00:02:19.227 LINK reactor 00:02:19.227 LINK reactor_perf 00:02:19.227 LINK spdk_nvme 00:02:19.227 LINK lsvmd 00:02:19.227 CC app/vhost/vhost.o 00:02:19.227 LINK vhost_fuzz 00:02:19.227 LINK app_repeat 00:02:19.227 LINK led 00:02:19.486 LINK thread 00:02:19.486 LINK hello_sock 00:02:19.486 CC test/nvme/aer/aer.o 00:02:19.486 CC test/nvme/sgl/sgl.o 00:02:19.486 CC test/nvme/reset/reset.o 00:02:19.486 CC test/nvme/err_injection/err_injection.o 00:02:19.486 CC test/nvme/reserve/reserve.o 00:02:19.486 CC test/nvme/startup/startup.o 00:02:19.486 CC test/nvme/simple_copy/simple_copy.o 00:02:19.486 CC test/nvme/e2edp/nvme_dp.o 00:02:19.486 CC test/nvme/overhead/overhead.o 00:02:19.486 LINK scheduler 00:02:19.486 CC test/nvme/connect_stress/connect_stress.o 00:02:19.486 CC test/nvme/fused_ordering/fused_ordering.o 00:02:19.486 CC test/nvme/compliance/nvme_compliance.o 00:02:19.486 CC test/nvme/boot_partition/boot_partition.o 00:02:19.486 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:19.486 CC test/nvme/cuse/cuse.o 00:02:19.486 CC test/nvme/fdp/fdp.o 00:02:19.486 CC test/accel/dif/dif.o 00:02:19.486 CC test/blobfs/mkfs/mkfs.o 00:02:19.486 LINK vhost 00:02:19.486 CC test/lvol/esnap/esnap.o 00:02:19.486 LINK spdk_nvme_perf 00:02:19.744 LINK idxd_perf 00:02:19.744 LINK spdk_top 00:02:19.744 LINK boot_partition 00:02:19.744 LINK startup 00:02:19.744 LINK doorbell_aers 00:02:19.744 LINK spdk_nvme_identify 00:02:19.744 LINK fused_ordering 00:02:19.744 LINK mkfs 00:02:19.744 CC examples/nvme/reconnect/reconnect.o 00:02:19.744 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:19.744 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:19.744 LINK reserve 00:02:19.744 CC examples/nvme/hello_world/hello_world.o 00:02:19.744 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:19.744 LINK sgl 00:02:19.744 CC examples/nvme/abort/abort.o 00:02:19.744 CC examples/nvme/arbitration/arbitration.o 00:02:19.744 CC examples/nvme/hotplug/hotplug.o 00:02:19.744 LINK err_injection 00:02:20.003 LINK connect_stress 00:02:20.003 LINK reset 00:02:20.003 LINK memory_ut 00:02:20.003 LINK simple_copy 00:02:20.003 LINK nvme_compliance 00:02:20.003 LINK aer 00:02:20.003 CC examples/accel/perf/accel_perf.o 00:02:20.003 LINK nvme_dp 00:02:20.003 CC examples/blob/hello_world/hello_blob.o 00:02:20.003 CC examples/blob/cli/blobcli.o 00:02:20.003 LINK overhead 00:02:20.003 LINK fdp 00:02:20.262 LINK hello_world 00:02:20.262 LINK cmb_copy 00:02:20.262 LINK pmr_persistence 00:02:20.262 LINK dif 00:02:20.262 LINK hotplug 00:02:20.262 LINK reconnect 00:02:20.521 LINK hello_blob 00:02:20.521 LINK arbitration 00:02:20.521 LINK abort 00:02:20.521 LINK nvme_manage 00:02:20.779 CC test/bdev/bdevio/bdevio.o 00:02:20.779 LINK accel_perf 00:02:20.779 LINK blobcli 00:02:21.038 CC examples/bdev/hello_world/hello_bdev.o 00:02:21.038 CC examples/bdev/bdevperf/bdevperf.o 00:02:21.038 LINK 
bdevio 00:02:21.296 LINK hello_bdev 00:02:21.296 LINK cuse 00:02:21.296 LINK iscsi_fuzz 00:02:21.864 LINK bdevperf 00:02:22.430 CC examples/nvmf/nvmf/nvmf.o 00:02:22.688 LINK nvmf 00:02:26.873 LINK esnap 00:02:26.874 00:02:26.874 real 1m15.877s 00:02:26.874 user 11m18.674s 00:02:26.874 sys 2m27.017s 00:02:26.874 04:51:32 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:26.874 04:51:32 make -- common/autotest_common.sh@10 -- $ set +x 00:02:26.874 ************************************ 00:02:26.874 END TEST make 00:02:26.874 ************************************ 00:02:26.874 04:51:32 -- common/autotest_common.sh@1142 -- $ return 0 00:02:26.874 04:51:32 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:26.874 04:51:32 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:26.874 04:51:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:26.874 04:51:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.874 04:51:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:26.874 04:51:32 -- pm/common@44 -- $ pid=461478 00:02:26.874 04:51:32 -- pm/common@50 -- $ kill -TERM 461478 00:02:26.874 04:51:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.874 04:51:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:26.874 04:51:32 -- pm/common@44 -- $ pid=461479 00:02:26.874 04:51:32 -- pm/common@50 -- $ kill -TERM 461479 00:02:26.874 04:51:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.874 04:51:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:26.874 04:51:32 -- pm/common@44 -- $ pid=461481 00:02:26.874 04:51:32 -- pm/common@50 -- $ kill -TERM 461481 00:02:26.874 04:51:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.874 04:51:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:26.874 04:51:32 -- pm/common@44 -- $ pid=461511 00:02:26.874 04:51:32 -- pm/common@50 -- $ sudo -E kill -TERM 461511 00:02:26.874 04:51:33 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:26.874 04:51:33 -- nvmf/common.sh@7 -- # uname -s 00:02:26.874 04:51:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:26.874 04:51:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:26.874 04:51:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:26.874 04:51:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:26.874 04:51:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:26.874 04:51:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:26.874 04:51:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:26.874 04:51:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:26.874 04:51:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:26.874 04:51:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:26.874 04:51:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:02:26.874 04:51:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:02:26.874 04:51:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:26.874 04:51:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:26.874 
04:51:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:26.874 04:51:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:26.874 04:51:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:26.874 04:51:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:26.874 04:51:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:26.874 04:51:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:26.874 04:51:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.874 04:51:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.874 04:51:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.874 04:51:33 -- paths/export.sh@5 -- # export PATH 00:02:26.874 04:51:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.874 04:51:33 -- nvmf/common.sh@47 -- # : 0 00:02:26.874 04:51:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:26.874 04:51:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:26.874 04:51:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:26.874 04:51:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:26.874 04:51:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:26.874 04:51:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:26.874 04:51:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:26.874 04:51:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:26.874 04:51:33 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:26.874 04:51:33 -- spdk/autotest.sh@32 -- # uname -s 00:02:26.874 04:51:33 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:26.874 04:51:33 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:26.874 04:51:33 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:26.874 04:51:33 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:26.874 04:51:33 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:26.874 04:51:33 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:26.874 04:51:33 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:26.874 04:51:33 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:26.874 04:51:33 -- spdk/autotest.sh@48 -- # udevadm_pid=520265 00:02:26.874 04:51:33 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 
00:02:26.874 04:51:33 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:26.874 04:51:33 -- pm/common@17 -- # local monitor 00:02:26.874 04:51:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.874 04:51:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.874 04:51:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.874 04:51:33 -- pm/common@21 -- # date +%s 00:02:26.874 04:51:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.874 04:51:33 -- pm/common@21 -- # date +%s 00:02:26.874 04:51:33 -- pm/common@25 -- # sleep 1 00:02:26.874 04:51:33 -- pm/common@21 -- # date +%s 00:02:26.874 04:51:33 -- pm/common@21 -- # date +%s 00:02:26.874 04:51:33 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720839093 00:02:26.874 04:51:33 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720839093 00:02:26.874 04:51:33 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720839093 00:02:26.874 04:51:33 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720839093 00:02:26.874 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720839093_collect-vmstat.pm.log 00:02:26.874 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720839093_collect-cpu-load.pm.log 00:02:26.874 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720839093_collect-cpu-temp.pm.log 00:02:26.874 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720839093_collect-bmc-pm.bmc.pm.log 00:02:27.809 04:51:34 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:27.809 04:51:34 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:27.809 04:51:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:27.809 04:51:34 -- common/autotest_common.sh@10 -- # set +x 00:02:27.809 04:51:34 -- spdk/autotest.sh@59 -- # create_test_list 00:02:27.809 04:51:34 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:27.809 04:51:34 -- common/autotest_common.sh@10 -- # set +x 00:02:27.809 04:51:34 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:27.809 04:51:34 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:27.809 04:51:34 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:27.809 04:51:34 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:27.809 04:51:34 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:27.809 04:51:34 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:27.809 04:51:34 -- common/autotest_common.sh@1455 -- # uname 00:02:27.809 04:51:34 -- common/autotest_common.sh@1455 -- # '[' Linux = 
FreeBSD ']' 00:02:27.809 04:51:34 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:27.809 04:51:34 -- common/autotest_common.sh@1475 -- # uname 00:02:27.809 04:51:34 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:27.809 04:51:34 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:27.809 04:51:34 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:27.809 04:51:34 -- spdk/autotest.sh@72 -- # hash lcov 00:02:27.809 04:51:34 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:27.809 04:51:34 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:27.809 --rc lcov_branch_coverage=1 00:02:27.809 --rc lcov_function_coverage=1 00:02:27.809 --rc genhtml_branch_coverage=1 00:02:27.809 --rc genhtml_function_coverage=1 00:02:27.809 --rc genhtml_legend=1 00:02:27.809 --rc geninfo_all_blocks=1 00:02:27.809 ' 00:02:27.809 04:51:34 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:27.809 --rc lcov_branch_coverage=1 00:02:27.809 --rc lcov_function_coverage=1 00:02:27.809 --rc genhtml_branch_coverage=1 00:02:27.809 --rc genhtml_function_coverage=1 00:02:27.809 --rc genhtml_legend=1 00:02:27.809 --rc geninfo_all_blocks=1 00:02:27.809 ' 00:02:27.809 04:51:34 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:27.809 --rc lcov_branch_coverage=1 00:02:27.809 --rc lcov_function_coverage=1 00:02:27.809 --rc genhtml_branch_coverage=1 00:02:27.809 --rc genhtml_function_coverage=1 00:02:27.809 --rc genhtml_legend=1 00:02:27.809 --rc geninfo_all_blocks=1 00:02:27.809 --no-external' 00:02:27.809 04:51:34 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:27.809 --rc lcov_branch_coverage=1 00:02:27.809 --rc lcov_function_coverage=1 00:02:27.809 --rc genhtml_branch_coverage=1 00:02:27.809 --rc genhtml_function_coverage=1 00:02:27.809 --rc genhtml_legend=1 00:02:27.809 --rc geninfo_all_blocks=1 00:02:27.809 --no-external' 00:02:27.809 04:51:34 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:27.809 lcov: LCOV version 1.14 00:02:27.809 04:51:34 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:34.402 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:34.402 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:34.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:34.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:34.403 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:34.403 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 
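(The run of geninfo warnings above is the expected by-product of the initial coverage baseline started earlier in the log: each test/cpp_headers/*.gcno comes from a compile-only check of one public header, so the object defines no functions and gcov has nothing to report.) A minimal sketch of that baseline capture, with SPDK_DIR and OUT as hypothetical stand-ins for the workspace paths in the trace:
# capture a zero-coverage baseline before any tests run (-c -i = initial capture)
SPDK_DIR=/path/to/spdk      # assumption: the checked-out SPDK source tree
OUT="$SPDK_DIR/../output"
lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
     --no-external -q -c -i -t Baseline \
     -d "$SPDK_DIR" -o "$OUT/cov_base.info"
# a post-test capture (same command without -i) is typically merged with this
# baseline so sources that never executed still show up at 0% coverage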
00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:34.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:34.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:56.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:56.347 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:06.338 04:52:12 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:06.338 04:52:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:06.338 04:52:12 -- common/autotest_common.sh@10 -- # set +x 00:03:06.338 04:52:12 -- spdk/autotest.sh@91 -- # rm -f 00:03:06.338 04:52:12 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:07.271 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:07.271 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:07.271 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:07.271 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:07.271 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:07.532 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:07.532 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:07.532 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:07.532 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:07.532 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:07.532 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:07.532 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:07.532 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:07.532 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:07.532 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:07.532 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:07.532 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:07.532 04:52:14 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:07.532 04:52:14 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:07.532 04:52:14 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:07.532 04:52:14 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:07.532 04:52:14 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:07.532 04:52:14 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:07.532 04:52:14 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:07.532 04:52:14 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:07.532 04:52:14 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:07.532 04:52:14 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:07.533 04:52:14 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:07.533 04:52:14 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:07.533 04:52:14 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:07.533 04:52:14 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:07.533 04:52:14 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:07.791 No valid GPT data, bailing 00:03:07.791 04:52:14 -- 
scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:07.791 04:52:14 -- scripts/common.sh@391 -- # pt= 00:03:07.791 04:52:14 -- scripts/common.sh@392 -- # return 1 00:03:07.791 04:52:14 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:07.791 1+0 records in 00:03:07.791 1+0 records out 00:03:07.791 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0027379 s, 383 MB/s 00:03:07.791 04:52:14 -- spdk/autotest.sh@118 -- # sync 00:03:07.792 04:52:14 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:07.792 04:52:14 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:07.792 04:52:14 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:09.696 04:52:15 -- spdk/autotest.sh@124 -- # uname -s 00:03:09.696 04:52:15 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:09.696 04:52:15 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:09.696 04:52:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:09.696 04:52:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.696 04:52:15 -- common/autotest_common.sh@10 -- # set +x 00:03:09.696 ************************************ 00:03:09.696 START TEST setup.sh 00:03:09.696 ************************************ 00:03:09.696 04:52:15 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:09.696 * Looking for test storage... 00:03:09.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:09.696 04:52:15 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:09.696 04:52:15 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:09.696 04:52:15 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:09.696 04:52:15 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:09.696 04:52:15 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.696 04:52:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:09.696 ************************************ 00:03:09.696 START TEST acl 00:03:09.696 ************************************ 00:03:09.696 04:52:15 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:09.696 * Looking for test storage... 
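(Before wiping the disk, the trace above runs SPDK's block_in_use guard: spdk-gpt.py looks for GPT data first, and blkid -s PTTYPE is the fallback probe for any other partition-table signature; only when both come back empty does dd zero the first MiB.) A rough shell equivalent of that guard, keeping only the generic blkid half and using DEV as a stand-in for the device in the log:
DEV=/dev/nvme0n1                          # device under test, from the trace
# any partition-table signature means the disk may hold real data
if [[ -n "$(blkid -s PTTYPE -o value "$DEV")" ]]; then
    echo "partition table found on $DEV, refusing to wipe" >&2
else
    dd if=/dev/zero of="$DEV" bs=1M count=1   # zero the first 1 MiB (needs root)
    sync
fi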
00:03:09.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:09.696 04:52:16 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:09.696 04:52:16 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:09.696 04:52:16 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:09.696 04:52:16 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:09.696 04:52:16 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:09.696 04:52:16 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:09.696 04:52:16 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:09.696 04:52:16 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:09.696 04:52:16 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:09.696 04:52:16 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:09.696 04:52:16 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:09.696 04:52:16 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:09.696 04:52:16 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:09.696 04:52:16 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:09.696 04:52:16 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:09.696 04:52:16 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:11.069 04:52:17 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:11.069 04:52:17 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:11.069 04:52:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.069 04:52:17 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:11.069 04:52:17 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.069 04:52:17 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:12.475 Hugepages 00:03:12.475 node hugesize free / total 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.475 00:03:12.475 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.475 04:52:18 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:12.475 04:52:18 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:12.475 04:52:18 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:12.475 04:52:18 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:12.475 04:52:18 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:12.475 ************************************ 00:03:12.475 START TEST denied 00:03:12.475 ************************************ 00:03:12.475 04:52:18 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:12.475 04:52:18 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:03:12.475 04:52:18 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:12.475 04:52:18 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:03:12.475 04:52:18 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.475 04:52:18 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:13.847 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:03:13.847 04:52:20 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:03:13.847 04:52:20 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:13.847 04:52:20 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:13.847 04:52:20 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:03:13.848 04:52:20 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:03:13.848 04:52:20 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:13.848 04:52:20 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:13.848 04:52:20 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:13.848 04:52:20 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:13.848 04:52:20 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:16.378 00:03:16.378 real 0m3.647s 00:03:16.378 user 0m1.054s 00:03:16.378 sys 0m1.675s 00:03:16.378 04:52:22 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:16.378 04:52:22 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:16.378 ************************************ 00:03:16.378 END TEST denied 00:03:16.378 ************************************ 00:03:16.378 04:52:22 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:16.378 04:52:22 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:16.378 04:52:22 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:16.378 04:52:22 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:16.378 04:52:22 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:16.378 ************************************ 00:03:16.378 START TEST allowed 00:03:16.378 ************************************ 00:03:16.378 04:52:22 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:16.379 04:52:22 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:03:16.379 04:52:22 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:16.379 04:52:22 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:03:16.379 04:52:22 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.379 04:52:22 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:18.279 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:18.279 04:52:24 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:18.279 04:52:24 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:18.279 04:52:24 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:18.279 04:52:24 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:18.279 04:52:24 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:20.180 00:03:20.180 real 0m3.854s 00:03:20.180 user 0m1.003s 00:03:20.180 sys 0m1.693s 00:03:20.180 04:52:26 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:20.180 04:52:26 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:20.180 ************************************ 00:03:20.180 END TEST allowed 00:03:20.180 ************************************ 00:03:20.180 04:52:26 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:20.180 00:03:20.180 real 0m10.285s 00:03:20.180 user 0m3.191s 00:03:20.180 sys 0m5.086s 00:03:20.180 04:52:26 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:20.180 04:52:26 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:20.180 ************************************ 00:03:20.180 END TEST acl 00:03:20.180 ************************************ 00:03:20.180 04:52:26 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:20.180 04:52:26 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:20.180 04:52:26 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:20.180 04:52:26 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:20.180 04:52:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:20.180 ************************************ 00:03:20.180 START TEST hugepages 00:03:20.180 ************************************ 00:03:20.180 04:52:26 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:20.180 * Looking for test storage... 00:03:20.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43516440 kB' 'MemAvailable: 47019292 kB' 'Buffers: 2704 kB' 'Cached: 10497360 kB' 'SwapCached: 0 kB' 'Active: 7491960 kB' 'Inactive: 3506552 kB' 'Active(anon): 7097608 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501592 kB' 'Mapped: 190256 kB' 'Shmem: 6599160 kB' 'KReclaimable: 191140 kB' 'Slab: 556904 kB' 'SReclaimable: 191140 kB' 'SUnreclaim: 365764 kB' 'KernelStack: 12832 kB' 'PageTables: 8068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 8220488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1764956 kB' 'DirectMap2M: 15980544 kB' 'DirectMap1G: 51380224 kB' 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.180 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.181 04:52:26 
00:03:20.181 04:52:26 setup.sh.hugepages -- setup/common.sh@31-32 -- # read -r var val _ / continue  (scan skips AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd and HugePages_Surp; none match Hugepagesize)
00:03:20.182 04:52:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:20.182 04:52:26 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:20.182 04:52:26 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
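The trace above is setup/common.sh's get_meminfo walking /proc/meminfo key by key with IFS=': ' until the requested key (here Hugepagesize) matches, then echoing its value. A minimal sketch of that pattern, assuming the system-wide /proc/meminfo only (the function body here is illustrative, not SPDK's exact source):

    get_meminfo() {
        # Split each "Key:   value kB" line on ':' and whitespace,
        # print the value for the requested key, as the xtrace shows.
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"          # e.g. 2048 for Hugepagesize on this host
                return 0
            fi
        done < /proc/meminfo
        return 1                     # key not present
    }

    get_meminfo Hugepagesize         # -> 2048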
00:03:20.182 04:52:26 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:20.182 04:52:26 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:20.182 04:52:26 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:20.182 04:52:26 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:20.182 04:52:26 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:20.182 04:52:26 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:20.182 04:52:26 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:20.182 04:52:26 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:03:20.182 04:52:26 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:03:20.182 04:52:26 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:20.182 04:52:26 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:20.182 04:52:26 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:20.182 04:52:26 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:20.182 04:52:26 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:20.182 04:52:26 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:20.182 04:52:26 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:03:20.182 04:52:26 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:20.182 04:52:26 setup.sh.hugepages -- setup/hugepages.sh@39-41 -- # echo 0  (once per "/sys/devices/system/node/node$node/hugepages/hugepages-"* size directory, on each of the two nodes)
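clear_hp resets every pre-existing hugepage pool before the test allocates its own. A sketch of the same sysfs walk, assuming root and a plain glob in place of the script's extglob pattern:

    # Zero the per-node hugepage pools (all page sizes), as clear_hp does above.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # requires root
        done
    done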
00:03:20.182 04:52:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:20.182 04:52:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:20.182 04:52:26 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:20.182 04:52:26 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:20.182 04:52:26 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:20.182 04:52:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:20.182 ************************************
00:03:20.182 START TEST default_setup
00:03:20.182 ************************************
00:03:20.182 04:52:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:03:20.182 04:52:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:20.182 04:52:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:20.182 04:52:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:20.182 04:52:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:20.182 04:52:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:20.182 04:52:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:20.182 04:52:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:20.182 04:52:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:20.182 04:52:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:20.182 04:52:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:20.182 04:52:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:20.182 04:52:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:20.182 04:52:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:20.182 04:52:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:20.182 04:52:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:20.182 04:52:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:20.182 04:52:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:20.182 04:52:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:20.182 04:52:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:03:20.182 04:52:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:20.182 04:52:26 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:20.182 04:52:26 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
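get_test_nr_hugepages turns a pool size in kB into a page count: 2097152 kB requested / 2048 kB per page = 1024 pages, all routed to node 0 since node_ids=('0'). The arithmetic as a sketch (the direct sysfs write is illustrative; the script applies the count through its own setup path):

    size_kb=2097152                                  # argument to get_test_nr_hugepages
    default_hugepages=2048                           # Hugepagesize in kB, read earlier
    nr_hugepages=$((size_kb / default_hugepages))    # 2097152 / 2048 = 1024
    echo "$nr_hugepages" > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages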
00:03:21.131 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:21.131 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:21.131 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:21.131 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:21.131 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:21.131 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:21.388 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:21.388 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:21.388 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:21.388 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:21.388 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:21.388 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:21.388 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:21.388 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:21.388 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:21.388 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:22.327 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
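Here scripts/setup.sh detaches the ioatdma and nvme kernel drivers and hands the devices to vfio-pci for userspace access. A simplified sketch of the standard driver_override rebind; setup.sh itself does considerably more bookkeeping, and this assumes the vfio-pci module is already loaded:

    rebind_to_vfio() {                   # illustrative helper, not from setup.sh
        local bdf=$1                     # PCI address, e.g. 0000:88:00.0
        echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"
        if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
            echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
        fi
        echo "$bdf" > /sys/bus/pci/drivers_probe   # driver_override steers the re-probe
    }

    rebind_to_vfio 0000:88:00.0          # the NVMe device in the log above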
00:03:22.327 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:22.327 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:22.327 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:22.327 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:22.327 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:22.327 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:22.327 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:22.327 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:22.327 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:22.327 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:22.327 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:22.327 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:22.327 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:22.327 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.327 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.327 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.327 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.327 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.327 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:22.327 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:22.327 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45599300 kB' 'MemAvailable: 49102152 kB' 'Buffers: 2704 kB' 'Cached: 10497448 kB' 'SwapCached: 0 kB' 'Active: 7511536 kB' 'Inactive: 3506552 kB' 'Active(anon): 7117184 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521324 kB' 'Mapped: 190360 kB' 'Shmem: 6599248 kB' 'KReclaimable: 191140 kB' 'Slab: 556648 kB' 'SReclaimable: 191140 kB' 'SUnreclaim: 365508 kB' 'KernelStack: 12816 kB' 'PageTables: 8236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8240716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1764956 kB' 'DirectMap2M: 15980544 kB' 'DirectMap1G: 51380224 kB'
00:03:22.328 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # read -r var val _ / continue  (scan skips MemTotal through HardwareCorrupted; none match AnonHugePages)
00:03:22.329 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:22.329 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:22.329 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:22.329 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
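Note the per-node variant in common.sh@18-29 above: when a node is given, the meminfo file under /sys/devices/system/node/ is used instead, and its lines carry a "Node N " prefix that the extglob expansion strips so the same scan works for both sources. A sketch of that normalization, assuming node 0 exists:

    shopt -s extglob                     # needed for the +([0-9]) pattern
    node=0
    mapfile -t mem < "/sys/devices/system/node/node$node/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")     # "Node 0 HugePages_Total: 512" -> "HugePages_Total: 512"
    printf '%s\n' "${mem[@]:0:3}"        # first few normalized lines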
00:03:22.329 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:22.329 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:22.329 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:22.329 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:22.329 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:22.329 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.329 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.329 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.329 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.329 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.329 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:22.329 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:22.329 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45606208 kB' 'MemAvailable: 49109060 kB' 'Buffers: 2704 kB' 'Cached: 10497452 kB' 'SwapCached: 0 kB' 'Active: 7511372 kB' 'Inactive: 3506552 kB' 'Active(anon): 7117020 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521124 kB' 'Mapped: 190360 kB' 'Shmem: 6599252 kB' 'KReclaimable: 191140 kB' 'Slab: 556736 kB' 'SReclaimable: 191140 kB' 'SUnreclaim: 365596 kB' 'KernelStack: 12832 kB' 'PageTables: 8256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8240868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1764956 kB' 'DirectMap2M: 15980544 kB' 'DirectMap1G: 51380224 kB'
00:03:22.330 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # read -r var val _ / continue  (scan skips MemTotal through HugePages_Rsvd; none match HugePages_Surp)
00:03:22.330 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:22.330 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:22.330 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:22.330 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
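The snapshots above already show the state verify_nr_hugepages is confirming: HugePages_Total and HugePages_Free at 1024, HugePages_Rsvd and HugePages_Surp at 0, i.e. the full requested pool was allocated and nothing is reserved or surplus. A quicker spot-check of the same four counters (an awk one-liner, not what the script itself does):

    awk '/^HugePages_(Total|Free|Rsvd|Surp):/' /proc/meminfo
    # On this run:
    #   HugePages_Total:    1024
    #   HugePages_Free:     1024
    #   HugePages_Rsvd:        0
    #   HugePages_Surp:        0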
00:03:22.330 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:22.330 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:22.330 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:22.330 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:22.330 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:22.330 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.330 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.330 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.330 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.331 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.331 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:22.331 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:22.331 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45608528 kB' 'MemAvailable: 49111380 kB' 'Buffers: 2704 kB' 'Cached: 10497472 kB' 'SwapCached: 0 kB' 'Active: 7510852 kB' 'Inactive: 3506552 kB' 'Active(anon): 7116500 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520528 kB' 'Mapped: 190360 kB' 'Shmem: 6599272 kB' 'KReclaimable: 191140 kB' 'Slab: 556736 kB' 'SReclaimable: 191140 kB' 'SUnreclaim: 365596 kB' 'KernelStack: 12752 kB' 'PageTables: 8024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8241260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1764956 kB' 'DirectMap2M: 15980544 kB' 'DirectMap1G: 51380224 kB'
00:03:22.331 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # read -r var val _ / continue  (scan skips MemTotal through Mapped so far; none match HugePages_Rsvd)
var val _ 00:03:22.331 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.332 
04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:22.332 nr_hugepages=1024 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:22.332 resv_hugepages=0 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:22.332 surplus_hugepages=0 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:22.332 anon_hugepages=0 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.332 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45607436 
kB' 'MemAvailable: 49110288 kB' 'Buffers: 2704 kB' 'Cached: 10497496 kB' 'SwapCached: 0 kB' 'Active: 7510820 kB' 'Inactive: 3506552 kB' 'Active(anon): 7116468 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520448 kB' 'Mapped: 190280 kB' 'Shmem: 6599296 kB' 'KReclaimable: 191140 kB' 'Slab: 556772 kB' 'SReclaimable: 191140 kB' 'SUnreclaim: 365632 kB' 'KernelStack: 12816 kB' 'PageTables: 8132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8241280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1764956 kB' 'DirectMap2M: 15980544 kB' 'DirectMap1G: 51380224 kB' 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.333 04:52:28 
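
The helper traced above is easier to read in compact form: get_meminfo slurps a meminfo file into an array, strips any per-node "Node <id>" prefix, then walks the keys with IFS=': ' until the requested field matches and echoes its value. Below is a minimal, runnable sketch reconstructed from the trace; anything the trace does not show (argument defaults, the ${val:-0} fallback) is an assumption, not SPDK's exact code.

#!/usr/bin/env bash
# Sketch of the get_meminfo helper as it appears in the xtrace above.
shopt -s extglob                      # enables the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=${2:-}          # key to look up, optional NUMA node
    local mem_f=/proc/meminfo mem line var val _
    # with a node argument, read that node's meminfo instead (the same
    # switch the trace performs at setup/common.sh@23-24)
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # per-node files prefix every line with "Node <id> "; strip it
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # non-matches fall through, as logged
        echo "${val:-0}"
        return 0
    done
}

get_meminfo HugePages_Rsvd            # system-wide lookup, as in the trace
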
00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:22.333 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45607436 kB' 'MemAvailable: 49110288 kB' 'Buffers: 2704 kB' 'Cached: 10497496 kB' 'SwapCached: 0 kB' 'Active: 7510820 kB' 'Inactive: 3506552 kB' 'Active(anon): 7116468 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520448 kB' 'Mapped: 190280 kB' 'Shmem: 6599296 kB' 'KReclaimable: 191140 kB' 'Slab: 556772 kB' 'SReclaimable: 191140 kB' 'SUnreclaim: 365632 kB' 'KernelStack: 12816 kB' 'PageTables: 8132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8241280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1764956 kB' 'DirectMap2M: 15980544 kB' 'DirectMap1G: 51380224 kB'
[... per-key scan elided: each key is tested against HugePages_Total and hits 'continue' until the key matches ...]
00:03:22.595 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:22.595 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:22.595 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:22.595 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:22.595 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:22.595 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:22.595 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:22.595 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:22.595 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:22.595 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:22.595 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:22.595 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
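
get_nodes, just traced, discovers the NUMA topology by globbing /sys/devices/system/node/node+([0-9]) and records one entry per node (nodes_sys[0]=1024, nodes_sys[1]=0 on this box). The trace does not show where those values are read from, so the sysfs knob below is an assumption; the per-node accounting mirrors the nodes_test updates logged above.

#!/usr/bin/env bash
# Hedged sketch of the node bookkeeping logged above; the nr_hugepages
# sysfs path is an assumed source for the nodes_sys values.
shopt -s extglob nullglob

declare -A nodes_sys nodes_test
nodes_test[0]=1024                     # expectation seeded earlier in the test
resv=0                                 # from get_meminfo HugePages_Rsvd above

for node in /sys/devices/system/node/node+([0-9]); do
    id=${node##*node}                  # "/sys/.../node1" -> "1"
    nodes_sys[$id]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || exit 1           # the trace shows no_nodes=2

for id in "${!nodes_test[@]}"; do
    (( nodes_test[id] += resv ))       # fold reserved pages into the expectation
done
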
setup/common.sh@32 -- # continue 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.596 04:52:28 
00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue (scan repeats for Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free; none matches HugePages_Surp)
00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:22.596 node0=1024 expecting 1024
00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:22.596
00:03:22.596 real 0m2.438s
00:03:22.596 user 0m0.641s
00:03:22.596 sys 0m0.907s
00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:22.596 04:52:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:22.596 ************************************
00:03:22.596 END TEST default_setup
00:03:22.596 ************************************
00:03:22.596 04:52:28 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
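The per-key trace above is setup/common.sh's get_meminfo walking /proc/meminfo under xtrace, which is why every non-matching key surfaces as a continue record. Stripped of the tracing, the pattern is a plain read loop; the sketch below is illustrative only (get_meminfo_sketch is a made-up name, not the helper in setup/common.sh), assuming stock bash and the standard /proc/meminfo layout.

# Minimal sketch of the key scan the trace shows, under the assumptions above.
get_meminfo_sketch() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    # Each /proc/meminfo line is "Key:   value [kB]"; print only the value
    # of the requested key, e.g. 0 for HugePages_Surp on this host.
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done < /proc/meminfo
  return 1
}
get_meminfo_sketch HugePages_Surp   # -> 0 in this run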
00:03:22.596 04:52:28 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:22.597 04:52:28 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:22.597 04:52:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:22.597 04:52:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:22.597 ************************************
00:03:22.597 START TEST per_node_1G_alloc
00:03:22.597 ************************************
00:03:22.597 04:52:28 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:22.597 04:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:22.597 04:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:22.597 04:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:22.597 04:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:22.597 04:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:22.597 04:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:22.597 04:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:22.597 04:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:22.597 04:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:22.597 04:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:22.597 04:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:22.597 04:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:22.597 04:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:22.597 04:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:22.597 04:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:22.597 04:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:22.597 04:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:22.597 04:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:22.597 04:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:22.597 04:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:22.597 04:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:22.597 04:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:22.597 04:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:22.597 04:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:22.597 04:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:22.597 04:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:22.597 04:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:23.532 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:23.532 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:23.532 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:23.532 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:23.532 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:23.532 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:23.532 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:23.532 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:23.532 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:23.532 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:23.532 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:23.532 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:23.532 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:23.532 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:23.532 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:23.532 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:23.532 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
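For reference, the sizing traced above is plain division: get_test_nr_hugepages was asked for 1048576 kB (1 GiB) across nodes 0 and 1, and with the default 2048 kB hugepage that is 512 pages, which get_test_nr_hugepages_per_node then records for every requested node. A hedged reconstruction of that arithmetic (variable names here are illustrative, not the ones in setup/hugepages.sh):

size_kb=1048576                                            # requested test size, 1 GiB
hugepagesize_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)
nr_hugepages=$(( size_kb / hugepagesize_kb ))              # 1048576 / 2048 = 512
for node in 0 1; do
  # mirrors nodes_test[_no_nodes]=512 in the trace: each listed node gets
  # the full per-node count, so the two nodes together hold 1024 pages
  echo "node${node}=${nr_hugepages}"
done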
00:03:23.798 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:23.798 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:23.798 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:23.798 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:23.798 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:23.798 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:23.798 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:23.798 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:23.799 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:23.799 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:23.799 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:23.799 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:23.799 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.799 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.799 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.799 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.799 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.799 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.799 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.799 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.799 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.799 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45587012 kB' 'MemAvailable: 49089864 kB' 'Buffers: 2704 kB' 'Cached: 10497568 kB' 'SwapCached: 0 kB' 'Active: 7511080 kB' 'Inactive: 3506552 kB' 'Active(anon): 7116728 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520620 kB' 'Mapped: 190432 kB' 'Shmem: 6599368 kB' 'KReclaimable: 191140 kB' 'Slab: 556736 kB' 'SReclaimable: 191140 kB' 'SUnreclaim: 365596 kB' 'KernelStack: 12864 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8241332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196224 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1764956 kB' 'DirectMap2M: 15980544 kB' 'DirectMap1G: 51380224 kB'
00:03:23.800 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue (scan repeats for every key from MemTotal through HardwareCorrupted; none matches AnonHugePages)
00:03:23.800 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:23.800 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.800 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:23.800 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
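Note the node= / mem_f handling in the get_meminfo trace above: with an empty node argument the @23 test against /sys/devices/system/node/node/meminfo fails, so the helper falls back to /proc/meminfo, and the @29 expansion strips the "Node N " prefix that only the per-node sysfs files carry. A hedged sketch of that per-node path (node_meminfo_sketch is an illustrative name, assuming bash with extglob and the sysfs layout shown above):

shopt -s extglob   # needed for the +([0-9]) pattern below

node_meminfo_sketch() {
  local node=$1 get=$2 mem_f=/proc/meminfo line var val _
  # Per-node statistics live in sysfs; fall back to the global file otherwise.
  [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
  while IFS= read -r line; do
    line=${line#Node +([0-9]) }          # strip "Node 0 " etc.; no-op for /proc
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done < "$mem_f"
  return 1
}
# node_meminfo_sketch 0 HugePages_Total  # -> the per-node count (512 per node requested here)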
00:03:23.800 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:23.800 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:23.800 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:23.800 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.800 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.800 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.800 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.800 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.800 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.800 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.800 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.800 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.800 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' ... (same /proc/meminfo snapshot re-read; MemFree: 45589628 kB, AnonPages: 521136 kB, Mapped: 190368 kB, KernelStack: 12880 kB, PageTables: 8256 kB; hugepage counters unchanged: HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0)
00:03:23.801 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue (scan repeats for every key from MemTotal through HugePages_Rsvd; none matches HugePages_Surp)
00:03:23.802 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.802 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.802 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:23.802 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
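At hugepages.sh@97/@99/@100 the verify pass collects anon, surp and resv before checking per-node totals, and the invariant in this run is simple: no THP-backed anonymous memory, no surplus pages, no reserved pages, and HugePages_Total equal to the 1024 pages the test configured. A sketch of that bookkeeping (hp is a hypothetical helper, not the code in setup/hugepages.sh):

# Pull one counter out of /proc/meminfo by exact key match.
hp() { awk -v key="$1:" '$1 == key {print $2}' /proc/meminfo; }

anon=$(hp AnonHugePages)    # kB of THP-backed anon memory; 0 in this run
surp=$(hp HugePages_Surp)   # pages allocated beyond nr_hugepages; 0 here
rsvd=$(hp HugePages_Rsvd)   # pages reserved but not yet faulted in; 0 here
total=$(hp HugePages_Total)
(( surp == 0 && rsvd == 0 && total == 1024 )) &&
  echo "hugepage pool verified: ${total} pages"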
setup/hugepages.sh@99 -- # surp=0 00:03:23.802 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:23.802 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:23.802 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:23.802 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.802 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.802 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.802 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.802 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.802 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.802 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.802 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.802 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.802 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45590244 kB' 'MemAvailable: 49093096 kB' 'Buffers: 2704 kB' 'Cached: 10497572 kB' 'SwapCached: 0 kB' 'Active: 7510948 kB' 'Inactive: 3506552 kB' 'Active(anon): 7116596 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520436 kB' 'Mapped: 190292 kB' 'Shmem: 6599372 kB' 'KReclaimable: 191140 kB' 'Slab: 556732 kB' 'SReclaimable: 191140 kB' 'SUnreclaim: 365592 kB' 'KernelStack: 12880 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8241372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1764956 kB' 'DirectMap2M: 15980544 kB' 'DirectMap1G: 51380224 kB' 00:03:23.802 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.802 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.802 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.802 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.802 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.802 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.802 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.802 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.802 
[... xtrace trimmed: the field-by-field scan continues through MemAvailable … FilePmdMapped, each non-matching key hitting "continue" ...] 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.804 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.804 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.804 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:23.804 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:23.804 nr_hugepages=1024 00:03:23.804 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:23.804
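The scan just completed is the whole of setup/common.sh's get_meminfo: slurp /proc/meminfo (or a per-node meminfo file when a node index is passed), strip any leading "Node N " prefix, then walk the fields until the requested key matches and echo its value. A condensed, self-contained sketch of that lookup follows; the helper name get_meminfo_value is illustrative, the script's own function is get_meminfo, and the echo-0 fallback for an absent key is an assumption of the sketch.
#!/usr/bin/env bash
shopt -s extglob  # needed for the +([0-9]) pattern below
# Sketch of the lookup traced above: fetch one key from /proc/meminfo,
# or from /sys/devices/system/node/nodeN/meminfo when a node is given.
get_meminfo_value() {
	local get=$1 node=${2:-} mem_f=/proc/meminfo mem var val _
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem <"$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix each line with "Node N "
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
	done < <(printf '%s\n' "${mem[@]}")
	echo 0  # key absent: report 0 (sketch's assumption)
}
get_meminfo_value HugePages_Rsvd    # -> 0 in the run above
get_meminfo_value HugePages_Surp 0  # node0 surplus, used further down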
resv_hugepages=0 00:03:23.804 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:23.804 surplus_hugepages=0 00:03:23.804 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:23.804 anon_hugepages=0 00:03:23.804 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:23.804 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:23.804 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:23.804 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:23.804 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:23.804 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.804 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.804 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.804 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.804 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.804 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.804 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.804 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.804 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.804 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45590852 kB' 'MemAvailable: 49093704 kB' 'Buffers: 2704 kB' 'Cached: 10497612 kB' 'SwapCached: 0 kB' 'Active: 7511256 kB' 'Inactive: 3506552 kB' 'Active(anon): 7116904 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520704 kB' 'Mapped: 190292 kB' 'Shmem: 6599412 kB' 'KReclaimable: 191140 kB' 'Slab: 556732 kB' 'SReclaimable: 191140 kB' 'SUnreclaim: 365592 kB' 'KernelStack: 12880 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8241396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1764956 kB' 'DirectMap2M: 15980544 kB' 'DirectMap1G: 51380224 kB' 00:03:23.804 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.804 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.804 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.804 
[... xtrace trimmed: the same field-by-field scan repeats for HugePages_Total, skipping every non-matching key ...] 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:23.806 04:52:30
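With HugePages_Total read back, the @110 arithmetic check is simply 1024 == 1024 + 0 + 0 (pages reported by the kernel == pages requested + surplus + reserved), and get_nodes then globs the NUMA node directories, recording the expected 512 pages per node. Roughly, under the same two-node assumptions as this run:
#!/usr/bin/env bash
shopt -s extglob nullglob
# Sketch of the get_nodes step above: enumerate NUMA nodes and record the
# per-node hugepage expectation (512 each on this two-socket machine).
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
	nodes_sys[${node##*node}]=512  # ".../node1" -> index 1
done
no_nodes=${#nodes_sys[@]}  # 2 in this run
(( no_nodes > 0 )) || { echo 'no NUMA nodes found' >&2; exit 1; }
# The @110 consistency check, restated with this run's numbers:
nr_hugepages=1024 surp=0 resv=0
(( 1024 == nr_hugepages + surp + resv )) && echo "accounting OK across $no_nodes nodes"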
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28138656 kB' 'MemUsed: 4691228 kB' 'SwapCached: 0 kB' 'Active: 2372600 kB' 'Inactive: 108696 kB' 'Active(anon): 2261712 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2251164 kB' 'Mapped: 28272 kB' 'AnonPages: 233252 kB' 'Shmem: 2031580 kB' 'KernelStack: 7576 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92336 kB' 'Slab: 314644 kB' 'SReclaimable: 92336 kB' 'SUnreclaim: 222308 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.806 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# continue 00:03:23.806 [... xtrace trimmed: per-field scan of the node0 meminfo dump, skipping every key until the HugePages entries ...]
00:03:23.807 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.807 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.807 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.807 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.807 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.807 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:23.807 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:23.807 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:23.807 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:23.807 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.807 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:23.807 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.807 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.807 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.807 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:23.807 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:23.807 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.807 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.807 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 17452284 kB' 'MemUsed: 10259540 kB' 'SwapCached: 0 kB' 'Active: 5138672 kB' 'Inactive: 3397856 kB' 'Active(anon): 4855208 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3397856 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8249196 kB' 'Mapped: 162020 kB' 'AnonPages: 287452 kB' 'Shmem: 4567876 kB' 'KernelStack: 5304 kB' 'PageTables: 3828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98804 kB' 'Slab: 242088 kB' 'SReclaimable: 98804 kB' 'SUnreclaim: 143284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:23.807 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
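The block just printed is node 1's /sys/devices/system/node/node1/meminfo with the "Node 1 " prefix stripped; note HugePages_Total: 512 and HugePages_Free: 512, the values the test is about to check. A minimal standalone sketch of this lookup pattern follows; the helper name and structure are mine (hypothetical), not SPDK's, though each step mirrors a command visible in the trace:

#!/usr/bin/env bash
# Hypothetical helper sketching the get_meminfo pattern traced above:
# read one field from /proc/meminfo, or from a NUMA node's meminfo,
# whose lines carry a "Node <n> " prefix that must be stripped first.
shopt -s extglob
node_meminfo() {
    local key=$1 node=${2:-} line var val _
    local file=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        file=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$file"
    mem=("${mem[@]#Node +([0-9]) }")    # drop the "Node 1 " prefixes
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done
    return 1
}
node_meminfo HugePages_Surp 1    # prints 0 for the node dumped above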
00:03:23.807-00:03:23.809 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-@32 -- # [trace condensed: the same field-by-field scan over the node1 output, MemTotal through HugePages_Free, each compared against HugePages_Surp and skipped]
00:03:23.809 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.809 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.809 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:23.809 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:23.809 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:23.809 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:23.809 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:23.809 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:03:23.809 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:23.809 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:23.809 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:23.809 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
node1=512 expecting 512
00:03:23.809 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:23.809 00:03:23.809 real 0m1.373s
00:03:23.809 user 0m0.570s
00:03:23.809 sys 0m0.765s
00:03:23.809 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:23.809 04:52:30 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:23.809 ************************************
00:03:23.809 END TEST per_node_1G_alloc
00:03:23.809 ************************************
00:03:24.067 04:52:30 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
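Both nodes came back at 512 pages against an expectation of 512, so the per-node 1G allocation verified cleanly. The verification pattern visible in the trace (fold reserved and surplus pages into each node's expected count, then compare against what the kernel reports) can be sketched standalone like this; the array contents are taken from the run above, the variable layout is a hypothetical simplification of setup/hugepages.sh:

#!/usr/bin/env bash
# Sketch of the per-node verification traced above (hypothetical
# standalone form; the real logic lives in setup/hugepages.sh).
nodes_test=(512 512)   # expected hugepages per NUMA node
nodes_sys=(512 512)    # pages the kernel actually reports per node
resv=0                 # global HugePages_Rsvd
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))   # reserved pages still count as present
    surp=0                           # per-node HugePages_Surp from the scan
    (( nodes_test[node] += surp ))   # so do surplus pages
done
for node in "${!nodes_test[@]}"; do
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || exit 1
done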
00:03:24.067 04:52:30 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:24.067 04:52:30 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:24.067 04:52:30 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:24.067 04:52:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:24.067 ************************************
00:03:24.067 START TEST even_2G_alloc
00:03:24.067 ************************************
00:03:24.067 04:52:30 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:03:24.067 04:52:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:24.067 04:52:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:24.067 04:52:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:24.067 04:52:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:24.067 04:52:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:24.067 04:52:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:24.067 04:52:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:24.068 04:52:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:24.068 04:52:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:24.068 04:52:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:24.068 04:52:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:24.068 04:52:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:24.068 04:52:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:24.068 04:52:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:24.068 04:52:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:24.068 04:52:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:24.068 04:52:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:24.068 04:52:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:24.068 04:52:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:24.068 04:52:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:24.068 04:52:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:24.068 04:52:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:24.068 04:52:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:24.068 04:52:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:24.068 04:52:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:24.068 04:52:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:24.068 04:52:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:24.068 04:52:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
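get_test_nr_hugepages above turns a 2 GiB request into nr_hugepages=1024 and, with no user-specified nodes, splits it evenly: 512 pages on each of the 2 NUMA nodes. A minimal sketch of that arithmetic (a hypothetical standalone form, assuming the 2048 kB Hugepagesize this run's meminfo reports); setup.sh is then invoked with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes to realize the split, and its device output follows:

#!/usr/bin/env bash
# Sizes below are in kB, matching /proc/meminfo units.
size=2097152                       # 2 GiB worth of hugepage memory
default_hugepages=2048             # Hugepagesize from /proc/meminfo
(( size >= default_hugepages )) || exit 1
nr_hugepages=$(( size / default_hugepages ))   # 1024 pages
no_nodes=2
per_node=$(( nr_hugepages / no_nodes ))        # 512 pages per node
nodes_test=()
for (( n = no_nodes - 1; n >= 0; n-- )); do    # same countdown as the trace
    nodes_test[n]=$per_node
done
echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]} node1=${nodes_test[1]}"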
00:03:25.003 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:25.003 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:25.003 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:25.003 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:25.003 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:25.003 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:25.003 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:25.003 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:25.003 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:25.003 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:25.003 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:25.003 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:25.003 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:25.003 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:25.003 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:25.003 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:25.003 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:25.268 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:25.268 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:25.268 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:25.268 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:25.268 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:25.268 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:25.268 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:25.268 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:25.268 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:25.268 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:25.268 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:25.268 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:25.268 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.268 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.268 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.268 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:25.268 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.268 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.268 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.268 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:25.268 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45600484 kB' 'MemAvailable: 49103336 kB' 'Buffers: 2704 kB' 'Cached: 10497700 kB' 'SwapCached: 0 kB' 'Active: 7511748 kB' 'Inactive: 3506552 kB' 'Active(anon): 7117396 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521096 kB' 'Mapped: 190508 kB' 'Shmem: 6599500 kB' 'KReclaimable: 191140 kB' 'Slab: 556908 kB' 'SReclaimable: 191140 kB' 'SUnreclaim: 365768 kB' 'KernelStack: 12848 kB' 'PageTables: 8212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8241592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1764956 kB' 'DirectMap2M: 15980544 kB' 'DirectMap1G: 51380224 kB'
00:03:25.268-00:03:25.269 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-@32 -- # [trace condensed: each /proc/meminfo field from MemTotal onward is compared against AnonHugePages and skipped until the match]
00:03:25.269 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:25.269 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.269 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:25.269 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
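verify_nr_hugepages first tests /sys/kernel/mm/transparent_hugepage/enabled (here "always [madvise] never", i.e. not fully disabled) before sampling AnonHugePages, since THP could otherwise hand out anonymous hugepages behind the test's back; the sample came back 0 kB. A small hedged sketch of that gate, using plain awk rather than the script's own meminfo reader:

#!/usr/bin/env bash
# Sketch of the THP gate traced above (assumes the mainline sysfs path).
thp=/sys/kernel/mm/transparent_hugepage/enabled
if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
    # THP is not disabled outright, so record an AnonHugePages baseline (kB).
    anon=$(awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo)
    echo "AnonHugePages baseline: ${anon:-0} kB"
fi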
00:03:25.269 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:25.269 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:25.269 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:25.269 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:25.269 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.269 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.269 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.269 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:25.270 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.270 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.270 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.270 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:25.270 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45600356 kB' 'MemAvailable: 49103208 kB' 'Buffers: 2704 kB' 'Cached: 10497704 kB' 'SwapCached: 0 kB' 'Active: 7511592 kB' 'Inactive: 3506552 kB' 'Active(anon): 7117240 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520920 kB' 'Mapped: 190444 kB' 'Shmem: 6599504 kB' 'KReclaimable: 191140 kB' 'Slab: 556836 kB' 'SReclaimable: 191140 kB' 'SUnreclaim: 365696 kB' 'KernelStack: 12864 kB' 'PageTables: 8228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8241612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1764956 kB' 'DirectMap2M: 15980544 kB' 'DirectMap1G: 51380224 kB'
00:03:25.270-00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-@32 -- # [trace condensed: the same field-by-field scan, MemTotal through FilePmdMapped, each compared against HugePages_Surp and skipped; the scan continues below]
00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
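The trace above is setup/common.sh's get_meminfo scanning /proc/meminfo one line at a time: IFS=': ' splits each "Key: value" pair, and the backslash-escaped right-hand side (\H\u\g\e\P\a\g\e\s\_\S\u\r\p) is simply how xtrace prints the quoted literal the key is compared against. A minimal standalone sketch of that lookup loop, assuming plain "Key: value" lines (my_get_meminfo is an illustrative name, not SPDK's function):

    # Hypothetical re-sketch of the lookup the xtrace output walks through.
    my_get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every non-matching field
            echo "$val"                        # e.g. "0" for HugePages_Surp
            return 0
        done < /proc/meminfo
        return 1                               # field absent
    }

    surp=$(my_get_meminfo HugePages_Surp)      # -> 0 on this host

Each skipped field costs one [[ ]] test plus one continue, which is why a single lookup produces dozens of near-identical trace lines in this log.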
00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:25.271 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45600724 kB' 'MemAvailable: 49103576 kB' 'Buffers: 2704 kB' 'Cached: 10497704 kB' 'SwapCached: 0 kB' 'Active: 7511556 kB' 'Inactive: 3506552 kB' 'Active(anon): 7117204 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520904 kB' 'Mapped: 190804 kB' 'Shmem: 6599504 kB' 'KReclaimable: 191140 kB' 'Slab: 556860 kB' 'SReclaimable: 191140 kB' 'SUnreclaim: 365720 kB' 'KernelStack: 12832 kB' 'PageTables: 8136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8242988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1764956 kB' 'DirectMap2M: 15980544 kB' 'DirectMap1G: 51380224 kB'
00:03:25.272 04:52:31 setup.sh.hugepages.even_2G_alloc -- [xtrace condensed: fields MemTotal through CmaFree read and skipped via continue; no match for HugePages_Rsvd yet]
00:03:25.273 04:52:31 setup.sh.hugepages.even_2G_alloc -- [xtrace condensed: Unaccepted, HugePages_Total, HugePages_Free skipped via continue, then:]
00:03:25.273 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:25.273 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.273 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:25.273 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:25.273 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:25.273 nr_hugepages=1024
00:03:25.273 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:25.273 resv_hugepages=0
00:03:25.273 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:25.273 surplus_hugepages=0
00:03:25.273 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:25.273 anon_hugepages=0
00:03:25.273 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:25.273 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
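The two (( )) assertions above are the even_2G_alloc bookkeeping: with surp=0 and resv=0 just read back, all 1024 configured pages must be plain, unreserved pages, and at the 2048 kB Hugepagesize reported in the snapshot that is exactly the 2 GiB the test name implies. A hedged re-derivation with our own variable names, not the script's:

    # Accounting check re-derived from the values this log shows.
    nr_hugepages=1024 surp=0 resv=0 hugepagesize_kb=2048

    (( nr_hugepages + surp + resv == 1024 )) || echo 'hugepage accounting mismatch'

    echo "$(( nr_hugepages * hugepagesize_kb )) kB"   # 2097152 kB == 2 GiB,
                                                      # matching 'Hugetlb: 2097152 kB'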
00:03:25.273 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:25.273 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:25.273 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:25.273 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:25.273 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.273 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.273 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.273 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:25.273 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.273 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.273 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.273 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:25.273 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45598464 kB' 'MemAvailable: 49101316 kB' 'Buffers: 2704 kB' 'Cached: 10497744 kB' 'SwapCached: 0 kB' 'Active: 7515452 kB' 'Inactive: 3506552 kB' 'Active(anon): 7121100 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524744 kB' 'Mapped: 190744 kB' 'Shmem: 6599544 kB' 'KReclaimable: 191140 kB' 'Slab: 556860 kB' 'SReclaimable: 191140 kB' 'SUnreclaim: 365720 kB' 'KernelStack: 12848 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8246444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1764956 kB' 'DirectMap2M: 15980544 kB' 'DirectMap1G: 51380224 kB'
00:03:25.274 04:52:31 setup.sh.hugepages.even_2G_alloc -- [xtrace condensed: fields MemTotal through Unaccepted read and skipped via continue, then:]
00:03:25.275 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:25.275 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:25.275 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:25.275 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:25.275 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:25.275 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:25.275 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:25.275 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:25.275 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:25.275 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:25.275 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:25.275 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:25.275 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:25.275 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:25.275 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:25.275 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:25.275 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:25.275 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:25.275 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.275 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.275 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:25.275 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:25.275 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.275 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.275 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.275 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:25.275 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28140620 kB' 'MemUsed: 4689264 kB' 'SwapCached: 0 kB' 'Active: 2372680 kB' 'Inactive: 108696 kB' 'Active(anon): 2261792 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2251216 kB' 'Mapped: 28996 kB' 'AnonPages: 233288 kB' 'Shmem: 2031632 kB' 'KernelStack: 7576 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92336 kB' 'Slab: 314724 kB' 'SReclaimable: 92336 kB' 'SUnreclaim: 222388 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- [xtrace condensed: node0 fields MemTotal onward read and skipped via continue; this excerpt breaks off mid-scan, before the HugePages_Surp match]
04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:25.586 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:25.586 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.586 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:25.586 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:25.586 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.586 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.586 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:25.586 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:25.586 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.586 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.586 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.586 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.587 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 17455828 kB' 'MemUsed: 10255996 kB' 'SwapCached: 0 kB' 'Active: 5138452 kB' 'Inactive: 3397856 kB' 'Active(anon): 4854988 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3397856 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8249232 kB' 'Mapped: 162196 kB' 'AnonPages: 287108 kB' 'Shmem: 4567912 kB' 'KernelStack: 5272 kB' 'PageTables: 3812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
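For readability, here is what the get_meminfo helper being traced above boils down to: a minimal sketch reconstructed from the xtrace, not the verbatim setup/common.sh (the loop shape and the final return are assumptions; the variable names and file handling follow the trace).

shopt -s extglob

# get_meminfo FIELD [NODE] - print one field's value from /proc/meminfo, or
# from the per-NUMA-node meminfo file when a node number is supplied.
get_meminfo() {
	local get=$1
	local node=$2
	local var val
	local mem_f mem
	mem_f=/proc/meminfo
	# Prefer the per-node view when it exists (as the trace does for node0).
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# Per-node lines carry a "Node N " prefix that /proc/meminfo lacks; strip
	# it so both sources parse identically (needs extglob for +([0-9])).
	mem=("${mem[@]#Node +([0-9]) }")
	local line
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}

get_meminfo HugePages_Surp 0   # prints node0's surplus count - 0 in the run above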
00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:25.276 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:25.586 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:25.586 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:25.586 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:25.586 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:25.586 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.586 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.586 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:25.586 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:25.586 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.586 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.586 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.587 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:25.587 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 17455828 kB' 'MemUsed: 10255996 kB' 'SwapCached: 0 kB' 'Active: 5138452 kB' 'Inactive: 3397856 kB' 'Active(anon): 4854988 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3397856 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8249232 kB' 'Mapped: 162196 kB' 'AnonPages: 287108 kB' 'Shmem: 4567912 kB' 'KernelStack: 5272 kB' 'PageTables: 3812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98804 kB' 'Slab: 242128 kB' 'SReclaimable: 98804 kB' 'SUnreclaim: 143324 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: the same field-by-field comparison runs over node1's output until HugePages_Surp matches]
00:03:25.588 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:25.588 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.588 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:25.588 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:25.588 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:25.588 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:25.588 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:25.588 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:25.588 node0=512 expecting 512
00:03:25.588 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:25.588 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:25.588 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:25.588 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:25.588 node1=512 expecting 512
00:03:25.588 04:52:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:25.588
00:03:25.588 real	0m1.449s
00:03:25.588 user	0m0.614s
00:03:25.588 sys	0m0.798s
00:03:25.588 04:52:31 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:25.588 04:52:31 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:25.588 ************************************
00:03:25.588 END TEST even_2G_alloc
00:03:25.588 ************************************
00:03:25.588 04:52:31 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:25.588 04:52:31 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:25.588 04:52:31 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:25.588 04:52:31 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:25.588 04:52:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
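even_2G_alloc passes because 1024 default-sized (2048 kB) pages spread evenly over two NUMA nodes should leave 512 on each, and both per-node HugePages_Total readings above report 512 with zero surplus. The arithmetic the verdict lines encode, as a small standalone sketch with the numbers taken from the trace:

# Even-allocation check, with the values from the run above (sketch only; the
# real comparison lives in verify_nr_hugepages in setup/hugepages.sh).
nr_hugepages=1024   # 2048 MB of HUGEMEM at 2048 kB per page
surp=0 resv=0       # HugePages_Surp / reserved pages, both 0 per get_meminfo
(( 1024 == nr_hugepages + surp + resv )) || echo "global count mismatch"
for node in 0 1; do
    # nodes_sys[node]=512 came from node$node/meminfo's HugePages_Total
    echo "node$node=512 expecting $(( nr_hugepages / 2 ))"
done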
00:03:25.588 ************************************
00:03:25.588 START TEST odd_alloc
00:03:25.588 ************************************
00:03:25.588 04:52:31 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:25.588 04:52:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:25.588 04:52:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:25.588 04:52:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:25.588 04:52:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:25.588 04:52:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:25.588 04:52:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:25.588 04:52:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:25.588 04:52:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:25.588 04:52:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:25.588 04:52:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:25.588 04:52:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:25.588 04:52:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:25.588 04:52:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:25.588 04:52:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:25.588 04:52:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:25.588 04:52:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:25.588 04:52:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:25.588 04:52:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:25.588 04:52:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:25.588 04:52:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:25.588 04:52:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:25.588 04:52:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:25.588 04:52:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:25.589 04:52:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:25.589 04:52:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:25.589 04:52:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:25.589 04:52:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:25.589 04:52:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:26.525 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:26.525 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:26.525 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:26.525 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:26.525 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:26.525 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:26.525 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:26.525 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:26.525 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:26.525 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:26.525 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:26.525 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:26.525 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:26.525 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:26.525 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:26.525 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:26.525 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
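The hugepages.sh@81-@84 lines traced before the device rescan above distribute an odd page count across the nodes: 2098176 kB at 2048 kB per page rounds up to 1025 pages, which cannot split evenly, so node0 ends up with 513 and node1 with 512 (and HugePages_Total later reads 1025 system-wide). A sketch of that distribution loop, reconstructed from the trace and not the verbatim hugepages.sh; the variable names follow the trace:

# Distribute _nr_hugepages across _no_nodes, highest-numbered node first,
# assigning floor(remaining / nodes-left) each step so the remainder lands
# on the lower nodes. Reproduces the @82-@84 values seen above.
_nr_hugepages=1025   # 2098176 kB requested / 2048 kB per page, rounded up
_no_nodes=2
declare -a nodes_test
while (( _no_nodes > 0 )); do
    nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))   # 512, then 513
    : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))          # leaves 513, then 0
    : $(( --_no_nodes ))                                         # 1, then 0
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"             # node0=513 node1=512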
00:03:26.790 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:26.790 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:26.790 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:26.790 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:26.790 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:26.790 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:26.790 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:26.790 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:26.790 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:26.790 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:26.790 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:26.790 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:26.790 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.790 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.790 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.790 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.790 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.790 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.790 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.790 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.790 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45616492 kB' 'MemAvailable: 49119344 kB' 'Buffers: 2704 kB' 'Cached: 10497836 kB' 'SwapCached: 0 kB' 'Active: 7509736 kB' 'Inactive: 3506552 kB' 'Active(anon): 7115384 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518624 kB' 'Mapped: 189596 kB' 'Shmem: 6599636 kB' 'KReclaimable: 191140 kB' 'Slab: 556284 kB' 'SReclaimable: 191140 kB' 'SUnreclaim: 365144 kB' 'KernelStack: 12752 kB' 'PageTables: 7696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 8228564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1764956 kB' 'DirectMap2M: 15980544 kB' 'DirectMap1G: 51380224 kB'
[xtrace condensed: the field scan compares each entry above against AnonHugePages and continues until it matches]
00:03:26.791 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:26.791 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.791 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:26.791 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
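The anon=0 just recorded comes from the @96-@97 probe above: the string "always [madvise] never" (the bracketed word marks the active policy) is tested against the pattern *\[never\]*, and since transparent hugepages are not fully disabled, AnonHugePages is read from the system-wide /proc/meminfo. A sketch of the same probe; reading the policy from /sys/kernel/mm/transparent_hugepage/enabled is an assumption here, since the trace only shows the already-expanded string:

# THP/anon probe as traced at hugepages.sh@96-@97 (sketch). The sysfs path is
# assumed - the trace shows only the expanded "always [madvise] never" value.
thp_policy=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
anon=0
if [[ $thp_policy != *\[never\]* ]]; then
    # THP not disabled outright, so the anonymous-THP counter is meaningful.
    anon=$(get_meminfo AnonHugePages)   # 0 kB in the snapshot above
fi
echo "anon=$anon"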
00:03:26.791 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.791 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45615960 kB' 'MemAvailable: 49118812 kB' 'Buffers: 2704 kB' 'Cached: 10497836 kB' 'SwapCached: 0 kB' 'Active: 7510204 kB' 'Inactive: 3506552 kB' 'Active(anon): 7115852 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519000 kB' 'Mapped: 189596 kB' 'Shmem: 6599636 kB' 'KReclaimable: 191140 kB' 'Slab: 556284 kB' 'SReclaimable: 191140 kB' 'SUnreclaim: 365144 kB' 'KernelStack: 12864 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 8228752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1764956 kB' 'DirectMap2M: 15980544 kB' 'DirectMap1G: 51380224 kB'
00:03:26.791 04:52:33 setup.sh.hugepages.odd_alloc -- [xtrace elided: the HugePages_Surp scan walks every key from MemTotal through HugePages_Free, `continue`-ing past each non-matching one]
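
The per-key scans elided above and below all come from the same helper. A minimal reconstruction of it, from the xtrace alone (the real setup/common.sh may differ in names and details; the escaped patterns like \H\u\g\e\P\a\g\e\s\_\S\u\r\p in the trace are just how xtrace prints the literal right-hand side of the comparison):

    #!/usr/bin/env bash
    shopt -s extglob  # needed for the +([0-9]) pattern below

    # Sketch of the /proc/meminfo parser exercised in this trace (reconstructed,
    # not copied from setup/common.sh).
    # Usage: get_meminfo AnonHugePages     -> value from /proc/meminfo
    #        get_meminfo HugePages_Surp 0  -> value from node0's meminfo, if present
    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f mem

        mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        # Per-node meminfo prefixes every line with "Node N "; strip it so the
        # same key comparison works for both files (no-op for /proc/meminfo).
        mem=("${mem[@]#Node +([0-9]) }")

        while IFS=': ' read -r var val _; do
            # Non-matching keys fall through to continue -- this is what
            # produces the long [[ key == \K\e\y ]] / continue runs above.
            [[ $var == "$get" ]] || continue
            echo "${val:-0}"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

With the snapshot printed above, `get_meminfo HugePages_Surp` reaches the 'HugePages_Surp: 0' entry and echoes 0, which is exactly the `echo 0` / `return 0` / `surp=0` sequence the trace records next.
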
00:03:26.793 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.793 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.793 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:26.793 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:26.793 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:26.793 04:52:33 setup.sh.hugepages.odd_alloc -- [xtrace elided: same get_meminfo prologue as above -- local get=HugePages_Rsvd, node=, mem_f=/proc/meminfo, mapfile, "Node N " prefix strip, IFS=': ']
00:03:26.793 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45613728 kB' 'MemAvailable: 49116580 kB' 'Buffers: 2704 kB' 'Cached: 10497840 kB' 'SwapCached: 0 kB' 'Active: 7509204 kB' 'Inactive: 3506552 kB' 'Active(anon): 7114852 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518488 kB' 'Mapped: 189460 kB' 'Shmem: 6599640 kB' 'KReclaimable: 191140 kB' 'Slab: 556300 kB' 'SReclaimable: 191140 kB' 'SUnreclaim: 365160 kB' 'KernelStack: 13312 kB' 'PageTables: 8944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 8230340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196416 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1764956 kB' 'DirectMap2M: 15980544 kB' 'DirectMap1G: 51380224 kB'
00:03:26.793 04:52:33 setup.sh.hugepages.odd_alloc -- [xtrace elided: the HugePages_Rsvd scan again `continue`s past every key from MemTotal through HugePages_Free]
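
A note on the `[[ -e /sys/devices/system/node/node/meminfo ]]` test in each get_meminfo prologue: these calls pass no node argument, so $node expands empty, the path test fails, and the helper falls back to the system-wide /proc/meminfo. A per-node query would look like this (illustrative only; node numbering depends on the machine):

    get_meminfo HugePages_Total      # reads /proc/meminfo
    get_meminfo HugePages_Total 0    # reads /sys/devices/system/node/node0/meminfo
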
00:03:26.794 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:26.794 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.794 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:26.794 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:26.794 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:26.794 nr_hugepages=1025
00:03:26.794 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:26.794 resv_hugepages=0
00:03:26.794 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:26.794 surplus_hugepages=0
00:03:26.794 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:26.794 anon_hugepages=0
00:03:26.794 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:26.794 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:26.794 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:26.794 04:52:33 setup.sh.hugepages.odd_alloc -- [xtrace elided: get_meminfo prologue as before -- local get=HugePages_Total, node=, mem_f=/proc/meminfo, mapfile, prefix strip, IFS=': ']
00:03:26.794 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45613416 kB' 'MemAvailable: 49116268 kB' 'Buffers: 2704 kB' 'Cached: 10497880 kB' 'SwapCached: 0 kB' 'Active: 7509996 kB' 'Inactive: 3506552 kB' 'Active(anon): 7115644 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519248 kB' 'Mapped: 189460 kB' 'Shmem: 6599680 kB' 'KReclaimable: 191140 kB' 'Slab: 556300 kB' 'SReclaimable: 191140 kB' 'SUnreclaim: 365160 kB' 'KernelStack: 13296 kB' 'PageTables: 9876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 8230180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196288 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1764956 kB' 'DirectMap2M: 15980544 kB' 'DirectMap1G: 51380224 kB'
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue
00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[00:03:26.795 04:52:33 setup.sh.hugepages.odd_alloc: the setup/common.sh@32 compare-and-continue plus @31 IFS=': ' / read -r var val _ cycle repeats once per remaining non-matching key: Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted; each tests [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] and hits continue]
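The repetitive @31/@32 pairs above are bash xtrace from setup/common.sh's get_meminfo helper: it walks the meminfo text one "Key: value" pair at a time and continues past every key that is not the one requested. A minimal standalone sketch of that loop, reconstructed from the trace; the real helper first snapshots the file into the mem array (the @28 mapfile and @16 printf seen below) and strips the per-node "Node N " prefix with an extglob expansion (@29), which the sed here approximates:

    #!/usr/bin/env bash
    # Sketch of setup/common.sh's get_meminfo (@17-@33), reconstructed from
    # the xtrace in this log; not the verbatim SPDK source.
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f=/proc/meminfo
        # A node argument switches to that NUMA node's meminfo (@23-@24).
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        # sed stands in for the extglob prefix strip the real helper does (@29).
        while IFS=': ' read -r var val _; do
            # Every non-matching key shows up in the log as one
            # "[[ Key == \H\u\g\e... ]]" plus "continue" pair (@32).
            [[ $var == "$get" ]] || continue
            echo "$val"   # e.g. 1025 for HugePages_Total (@33)
            return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
    }

    get_meminfo HugePages_Total   # prints 1025 on this box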
00:03:26.796 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:26.796 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:03:26.796 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:26.796 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:26.796 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:26.796 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:03:26.796 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:26.796 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:26.796 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:26.796 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:26.796 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:26.796 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:26.796 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:26.796 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:26.796 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:26.796 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.796 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:26.796 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:26.796 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.796 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.796 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:26.796 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:26.796 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.796 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.796 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.796 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.796 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28156200 kB' 'MemUsed: 4673684 kB' 'SwapCached: 0 kB' 'Active: 2369976 kB' 'Inactive: 108696 kB' 'Active(anon): 2259088 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2251352 kB' 'Mapped: 27544 kB' 'AnonPages: 230516 kB' 'Shmem: 2031768 kB' 'KernelStack: 7512 kB' 'PageTables: 4016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92336 kB' 'Slab: 314544 kB' 'SReclaimable: 92336 kB' 'SUnreclaim: 222208 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[00:03:26.796-797 04:52:33 setup.sh.hugepages.odd_alloc: the setup/common.sh@31/@32 cycle repeats over the node0 keys above, MemTotal through HugePages_Free, each testing [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and hitting continue, until HugePages_Surp matches]
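For the per-node lookup just traced, mem_f points at the node's own sysfs meminfo, whose lines carry a "Node 0 " prefix that the @29 substitution strips before the key scan. A short sketch of just that strip, exactly as traced; only the standalone framing and the demo output are mine:

    #!/usr/bin/env bash
    # The @28-@29 prefix strip on a per-node meminfo snapshot.
    shopt -s extglob                  # required for the +([0-9]) pattern below
    node=0
    mapfile -t mem </sys/devices/system/node/node${node}/meminfo
    # 'Node 0 MemTotal: 32829884 kB' -> 'MemTotal: 32829884 kB'
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]:0:3}"     # first three cleaned lines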
00:03:26.797 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.797 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.797 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:26.797 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:26.797 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:26.797 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:26.797 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:26.797 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.797 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:03:26.797 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:26.797 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.797 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.797 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:26.797 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:26.797 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.797 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.797 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.797 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.797 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 17454212 kB' 'MemUsed: 10257612 kB' 'SwapCached: 0 kB' 'Active: 5140364 kB' 'Inactive: 3397856 kB' 'Active(anon): 4856900 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3397856 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8249252 kB' 'Mapped: 161916 kB' 'AnonPages: 289060 kB' 'Shmem: 4567932 kB' 'KernelStack: 5832 kB' 'PageTables: 5452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98804 kB' 'Slab: 241756 kB' 'SReclaimable: 98804 kB' 'SUnreclaim: 142952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[00:03:26.797-798 04:52:33 setup.sh.hugepages.odd_alloc: the setup/common.sh@31/@32 cycle repeats over the node1 keys above, MemTotal through HugePages_Free, each testing [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and hitting continue, until HugePages_Surp matches]
00:03:26.798 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.798 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.798 04:52:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:26.798 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
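The @110-@117 bookkeeping traced above reads: the kernel-wide HugePages_Total (1025) must equal the requested pages plus surplus plus reserved, and each node's expected count is then padded with its reserved and surplus pages (both 0 in this run). A restatement of that arithmetic as a sketch; the loop body is paraphrased from the trace, and get_meminfo is the helper sketched earlier in this log:

    #!/usr/bin/env bash
    # Accounting sketch for the @110-@117 entries above; surp and resv are
    # the kernel's surplus and reserved hugepage counts, both 0 in this run.
    nr_hugepages=1025 surp=0 resv=0
    (( 1025 == nr_hugepages + surp + resv )) || echo 'unexpected hugepage total' >&2

    nodes_test=([0]=512 [1]=513)   # per-node counts the odd_alloc test asked for
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                     # @116
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))    # @117
    done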
in "${!nodes_test[@]}" 00:03:26.798 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.798 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.798 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:26.798 node0=512 expecting 513 00:03:26.798 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.798 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.798 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.798 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:26.798 node1=513 expecting 512 00:03:26.798 04:52:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:26.798 00:03:26.798 real 0m1.386s 00:03:26.798 user 0m0.585s 00:03:26.798 sys 0m0.762s 00:03:26.798 04:52:33 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.798 04:52:33 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:26.798 ************************************ 00:03:26.798 END TEST odd_alloc 00:03:26.798 ************************************ 00:03:26.798 04:52:33 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:26.798 04:52:33 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:26.798 04:52:33 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:26.798 04:52:33 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.798 04:52:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:26.798 ************************************ 00:03:26.798 START TEST custom_alloc 00:03:26.798 ************************************ 00:03:26.798 04:52:33 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:26.798 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:26.798 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:26.798 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:26.798 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:26.798 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:26.798 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:26.798 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:26.798 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:26.798 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:26.798 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- 
00:03:26.798 04:52:33 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:26.798 04:52:33 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:26.798 04:52:33 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:26.798 04:52:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:26.798 ************************************
00:03:26.798 START TEST custom_alloc
00:03:26.798 ************************************
00:03:26.798 04:52:33 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:03:26.798 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:26.798 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:26.798 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:26.798 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:26.798 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:26.798 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:26.798 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:26.798 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:26.798 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:26.798 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:26.798 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:26.798 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:26.798 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:26.799 04:52:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
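Two things worth unpacking in the trace above. get_test_nr_hugepages turns a kB budget into a count of default-size pages (1048576 kB gives 512 pages and 2097152 kB gives 1024 pages with the 2048 kB hugepages on this rig; the division itself is inferred from the traced inputs and outputs, not shown verbatim). The @181-@187 entries then serialize the per-node plan into the HUGENODE string handed to setup.sh. A hedged sketch of both steps:

    #!/usr/bin/env bash
    # kB budget -> page count (@49-@57; division inferred from traced values)
    # and HUGENODE serialization (@181-@187).
    default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 here
    size=2097152                        # kB, i.e. 2 GiB for the second pool
    (( size >= default_hugepages )) && nr_hugepages=$(( size / default_hugepages ))
    echo "nr_hugepages=$nr_hugepages"   # -> 1024

    nodes_hp=([0]=512 [1]=1024)         # per-node plan built by custom_alloc
    HUGENODE=()
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    done
    # custom_alloc sets IFS=, (@167), so the array joins with commas:
    (IFS=,; echo "HUGENODE='${HUGENODE[*]}'")   # nodes_hp[0]=512,nodes_hp[1]=1024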
00:03:28.179 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:28.179 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:28.179 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:28.179 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:28.179 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:28.179 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:28.179 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:28.179 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:28.179 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:28.179 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:28.179 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:28.179 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:28.179 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:28.179 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:28.179 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:28.179 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:28.179 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:28.179 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:28.179 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:28.179 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:03:28.179 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:28.179 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:28.179 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:28.179 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:28.179 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:28.179 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:28.179 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:28.179 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:28.179 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:28.179 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:28.179 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:28.179 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.179 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.179 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.179 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.179 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.180 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.180 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.180 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44533672 kB' 'MemAvailable: 48036524 kB' 'Buffers: 2704 kB' 'Cached: 10497964 kB' 'SwapCached: 0 kB' 'Active: 7508516 kB' 'Inactive: 3506552 kB' 'Active(anon): 7114164 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517592 kB' 'Mapped: 189528 kB' 'Shmem: 6599764 kB' 'KReclaimable: 191140 kB' 'Slab: 556272 kB' 'SReclaimable: 191140 kB' 'SUnreclaim: 365132 kB' 'KernelStack: 12816 kB' 'PageTables: 7764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 8228192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196240 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1764956 kB' 'DirectMap2M: 15980544 kB' 'DirectMap1G: 51380224 kB'
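The @96 guard just traced expands /sys/kernel/mm/transparent_hugepage/enabled, here "always [madvise] never", and only samples AnonHugePages when THP is not pinned at [never], since transparent hugepages can inflate anonymous-hugepage figures independently of the HugePages_* pool. A sketch of that guard; get_meminfo is the helper sketched earlier in this log:

    #!/usr/bin/env bash
    # The THP gate behind setup/hugepages.sh@96, as expanded in the trace.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never" here
    if [[ $thp != *"[never]"* ]]; then
        # THP can hand out anonymous hugepages behind the test's back, so a
        # baseline AnonHugePages reading is taken before judging HugePages_*.
        anon=$(get_meminfo AnonHugePages)   # helper sketched earlier
        echo "AnonHugePages baseline: ${anon} kB"
    fi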
[xtrace elided: setup/common.sh@31-32 walk the snapshot key by key; every key from MemTotal through HardwareCorrupted takes the continue branch until AnonHugePages matches]
00:03:28.181 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:28.181 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.181 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:28.181 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:28.181 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[xtrace elided: the same setup/common.sh@17-31 locals and mapfile setup as above, now with get=HugePages_Surp]
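The anon=0 above is gated by the hugepages.sh@96 test: "always [madvise] never" is the content of /sys/kernel/mm/transparent_hugepage/enabled on this host, with brackets marking the active mode, and the pattern test only bothers reading AnonHugePages when THP is not pinned to [never]. A hedged sketch of that gate (variable names here are illustrative, not the verbatim script):

    # THP mode string looks like "always [madvise] never"; only count
    # anonymous hugepages if THP can actually be in use.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB in this run
    else
        anon=0                              # THP disabled: nothing to account for
    fi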
[xtrace elided: second /proc/meminfo snapshot printed at setup/common.sh@16, identical to the first apart from allocator churn (MemFree: 44542204 kB, AnonPages: 517488 kB, Mapped: 189468 kB); hugepage counters unchanged (HugePages_Total: 1536, HugePages_Free: 1536, HugePages_Rsvd: 0, HugePages_Surp: 0); the key-by-key scan then misses every key from MemTotal through HugePages_Rsvd]
00:03:28.183 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:28.183 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.183 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:28.183 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:28.183 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[xtrace elided: the same setup/common.sh@17-31 locals and mapfile setup, now with get=HugePages_Rsvd]
[xtrace elided: third /proc/meminfo snapshot at setup/common.sh@16 (MemFree: 44541196 kB, AnonPages: 517316 kB; hugepage counters still HugePages_Total: 1536, HugePages_Free: 1536, HugePages_Rsvd: 0, HugePages_Surp: 0), then the scan misses every key from MemTotal through HugePages_Free]
00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:28.185 nr_hugepages=1536
00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:28.185 resv_hugepages=0
00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:28.185 surplus_hugepages=0
00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:28.185 anon_hugepages=0
00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[xtrace elided: the same setup/common.sh@17-31 setup with get=HugePages_Total, then a fourth /proc/meminfo snapshot (MemFree: 44541540 kB, AnonPages: 517468 kB); the snapshot is cut off at 'CmaFree: 0 kB' at the end of this capture, before the hugepage counters]
'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1764956 kB' 'DirectMap2M: 15980544 kB' 'DirectMap1G: 51380224 kB' 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.185 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.185 04:52:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [xtrace elided: per-key scan of /proc/meminfo against HugePages_Total; every non-matching key takes the continue branch] 00:03:28.186 04:52:34
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.186 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.186 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.186 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.186 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.186 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.186 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.186 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.186 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.186 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.186 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.186 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.186 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.186 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.186 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.186 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.186 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.186 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.186 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.186 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.186 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.186 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.186 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.186 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 
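For readers following the trace: get_meminfo is the harness's /proc/meminfo parser, and the long key-by-key runs above are its read loop skipping every field until the requested one matches. Below is a minimal bash sketch of that loop, reconstructed from the xtrace output; the real setup/common.sh may differ in detail, and the for/here-string form here stands in for whatever read construct the harness actually uses.

    #!/usr/bin/env bash
    # Hedged reconstruction of get_meminfo from the xtrace above; not verbatim setup/common.sh.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo
        local -a mem
        # Per-node queries read that NUMA node's own meminfo when the file exists;
        # with no node argument the node$node path does not exist and /proc/meminfo is kept,
        # which is why the trace tests /sys/devices/system/node/node/meminfo above.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # strip the "Node N " prefix of per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue    # the runs of 'continue' seen in the log
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Total      # prints 1536 on this box
    get_meminfo HugePages_Surp 0     # node 0 surplus; prints 0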
00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28144496 kB' 'MemUsed: 4685388 kB' 'SwapCached: 0 kB' 'Active: 2370136 kB' 'Inactive: 108696 kB' 'Active(anon): 2259248 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2251476 kB' 'Mapped: 27544 kB' 'AnonPages: 230596 kB' 'Shmem: 2031892 kB' 'KernelStack: 7544 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92336 kB' 'Slab: 314564 kB' 'SReclaimable: 92336 kB' 'SUnreclaim: 222228 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.187 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.187 04:52:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ [xtrace elided: per-key scan of node0 meminfo against HugePages_Surp; every non-matching key takes the continue branch] 00:03:28.188 04:52:34
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:28.188 04:52:34 
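At this point node 0's bookkeeping is done: the reserved count (resv=0) and node 0's HugePages_Surp (0) have been folded into nodes_test[0], and the loop moves on to node 1. A hedged sketch of that accounting step, using the names from the trace and the get_meminfo sketch above (the 512/1024 expected totals are the values this run configured):

    # Hedged sketch of the hugepages.sh@115-117 loop seen in the trace; names follow the log.
    nodes_sys=([0]=512 [1]=1024)    # pages actually configured per NUMA node
    nodes_test=([0]=512 [1]=1024)   # pages the test expects per node
    resv=0                          # HugePages_Rsvd read earlier in the trace
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                   # fold in reserved pages
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # fold in surplus pages
        echo "node${node}=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    # On this run both adjustments are 0, so the output matches the log:
    # node0=512 expecting 512
    # node1=1024 expecting 1024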
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16397120 kB' 'MemUsed: 11314704 kB' 'SwapCached: 0 kB' 'Active: 5138280 kB' 'Inactive: 3397856 kB' 'Active(anon): 4854816 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3397856 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8249256 kB' 'Mapped: 161924 kB' 'AnonPages: 286904 kB' 'Shmem: 4567936 kB' 'KernelStack: 5320 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98804 kB' 'Slab: 241800 kB' 'SReclaimable: 98804 kB' 'SUnreclaim: 142996 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.188 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [xtrace elided: per-key scan of node1 meminfo against HugePages_Surp; every non-matching key takes the continue branch] 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:28.189 node0=512 expecting 512 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:28.189 node1=1024 expecting 1024 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:28.189 00:03:28.189 real 0m1.400s 00:03:28.189 user 0m0.587s 00:03:28.189 sys 0m0.774s 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:28.189 04:52:34 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:28.189 ************************************ 00:03:28.189 END TEST custom_alloc 00:03:28.189 ************************************ 00:03:28.448 04:52:34 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:28.448 04:52:34 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:28.448 04:52:34 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:28.448 04:52:34 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:28.448 04:52:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:28.448 ************************************ 00:03:28.448 START TEST no_shrink_alloc 00:03:28.448 ************************************ 00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g 
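The node0/node1 lines above boil down to a per-node comparison of the hugepage counts the test requested against what the kernel actually allocated. A minimal standalone sketch of that check (assumed names and shape; the real logic lives in setup/hugepages.sh):

    #!/usr/bin/env bash
    # Sketch only: compare requested vs. observed 2048 kB hugepages per NUMA node.
    declare -A expected=( [0]=512 [1]=1024 )   # values taken from the log above
    for node in "${!expected[@]}"; do
        nr=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
        got=$(<"$nr")
        echo "node${node}=${got} expecting ${expected[$node]}"
        [[ $got -eq ${expected[$node]} ]] || exit 1   # fail the test on mismatch
    done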
00:03:28.448 04:52:34 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:28.448 04:52:34 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:28.448 04:52:34 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:28.448 04:52:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:28.448 ************************************
00:03:28.448 START TEST no_shrink_alloc
00:03:28.448 ************************************
00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:28.448 04:52:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:29.384 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:29.384 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:29.384 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:29.384 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:29.384 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:29.384 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:29.384 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:29.384 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:29.384 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:29.384 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:29.384 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:29.384 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:29.384 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:29.384 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:29.384 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:29.384 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:29.384 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
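Every device the job needs is already bound to vfio-pci here, so setup.sh has nothing to move. For reference, the usual sysfs sequence for handing one PCI function to vfio-pci looks roughly like the sketch below (illustrative only; scripts/setup.sh layers device allowlists, IOMMU checks, and hugepage setup on top of this):

    # Sketch: rebind one PCI function to vfio-pci via sysfs (run as root).
    # The BDF is an example taken from the log above.
    dev=0000:88:00.0
    if [[ -e /sys/bus/pci/devices/$dev/driver ]]; then
        echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind"    # detach current driver
    fi
    echo vfio-pci > "/sys/bus/pci/devices/$dev/driver_override"    # pin the next probe
    echo "$dev" > /sys/bus/pci/drivers_probe                       # trigger the re-probe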
00:03:29.648 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:29.648 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:29.648 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:29.648 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:29.648 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:29.648 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:29.648 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:29.648 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:29.648 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:29.648 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:29.648 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:29.648 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:29.648 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:29.648 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.648 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:29.648 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:29.648 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.648 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.648 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.648 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.648 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45563968 kB' 'MemAvailable: 49066820 kB' 'Buffers: 2704 kB' 'Cached: 10498092 kB' 'SwapCached: 0 kB' 'Active: 7508908 kB' 'Inactive: 3506552 kB' 'Active(anon): 7114556 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517936 kB' 'Mapped: 189576 kB' 'Shmem: 6599892 kB' 'KReclaimable: 191140 kB' 'Slab: 556480 kB' 'SReclaimable: 191140 kB' 'SUnreclaim: 365340 kB' 'KernelStack: 12848 kB' 'PageTables: 7812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8228484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196240 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1764956 kB' 'DirectMap2M: 15980544 kB' 'DirectMap1G: 51380224 kB'
[xtrace elided: the get_meminfo loop walks every /proc/meminfo field from MemTotal through HardwareCorrupted with the same continue/IFS/read triplet before the requested field matches]
00:03:29.650 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:29.650 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:29.650 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:29.650 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
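The xtrace above is common.sh's get_meminfo scanning /proc/meminfo one "field: value" pair at a time until it reaches the requested field. A standalone sketch of the same idiom (assumed shape; the real helper reads a pre-loaded array rather than the file directly):

    # Sketch of the parsing idiom traced above: split each meminfo line on
    # ': ' and print the value of one requested field.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do      # _ swallows the trailing 'kB'
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1                                   # field not found
    }
    get_meminfo AnonHugePages                      # prints 0 on this machine, per the dump above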
00:03:29.650 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:29.650 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:29.650 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:29.650 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:29.650 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:29.650 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.650 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:29.650 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:29.650 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.650 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.650 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.650 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.650 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45563968 kB' 'MemAvailable: 49066820 kB' 'Buffers: 2704 kB' 'Cached: 10498092 kB' 'SwapCached: 0 kB' 'Active: 7508896 kB' 'Inactive: 3506552 kB' 'Active(anon): 7114544 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517996 kB' 'Mapped: 189584 kB' 'Shmem: 6599892 kB' 'KReclaimable: 191140 kB' 'Slab: 556528 kB' 'SReclaimable: 191140 kB' 'SUnreclaim: 365388 kB' 'KernelStack: 12832 kB' 'PageTables: 7780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8228500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1764956 kB' 'DirectMap2M: 15980544 kB' 'DirectMap1G: 51380224 kB'
[xtrace elided: identical continue/IFS/read triplets for every field from MemTotal through HugePages_Rsvd until HugePages_Surp matches]
00:03:29.652 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:29.652 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:29.652 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:29.652 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
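With anon and surp both 0, the verifier fetches HugePages_Rsvd next. The accounting it is assembling reduces to roughly the following sketch (assumed shape, reusing the get_meminfo sketch above; the authoritative checks live in setup/hugepages.sh):

    # Sketch: hugepage accounting from the meminfo snapshots above.
    total=$(get_meminfo HugePages_Total)    # 1024
    free=$(get_meminfo HugePages_Free)      # 1024
    surp=$(get_meminfo HugePages_Surp)      # 0
    resv=$(get_meminfo HugePages_Rsvd)      # 0
    # pages the test can count on: total minus surplus and reserved
    echo "usable=$(( total - surp - resv )) free=$free"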
00:03:29.653 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:29.653 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:29.653 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:29.653 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:29.653 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:29.653 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.653 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:29.653 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:29.653 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.653 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.653 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.653 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.653 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45566536 kB' 'MemAvailable: 49069388 kB' 'Buffers: 2704 kB' 'Cached: 10498112 kB' 'SwapCached: 0 kB' 'Active: 7508288 kB' 'Inactive: 3506552 kB' 'Active(anon): 7113936 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517292 kB' 'Mapped: 189504 kB' 'Shmem: 6599912 kB' 'KReclaimable: 191140 kB' 'Slab: 556524 kB' 'SReclaimable: 191140 kB' 'SUnreclaim: 365384 kB' 'KernelStack: 12784 kB' 'PageTables: 7616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8228524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1764956 kB' 'DirectMap2M: 15980544 kB' 'DirectMap1G: 51380224 kB'
[xtrace elided: the field-by-field scan repeats for HugePages_Rsvd, skipping MemTotal through KReclaimable]
00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 --
# read -r var val _ 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.654 04:52:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.654 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- 
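
A note on the heavy backslash escaping in this trace: strings such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d are not corruption. When the right-hand side of == inside [[ ]] is quoted, bash matches it literally instead of as a glob, and xtrace renders that by escaping every character. A minimal, runnable illustration, with values taken from this run:

# Quoted pattern => literal string match; xtrace prints it fully escaped.
var=MemTotal
[[ $var == "HugePages_Rsvd" ]] || echo "skip $var"
# The transparent-hugepage check traced further below uses the same idiom:
[[ "always [madvise] never" != *"[never]"* ]] && echo "THP is not disabled"

00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc --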
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:29.655 nr_hugepages=1024 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:29.655 resv_hugepages=0 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:29.655 surplus_hugepages=0 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:29.655 anon_hugepages=0 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45566968 kB' 'MemAvailable: 49069820 kB' 'Buffers: 2704 kB' 'Cached: 10498136 kB' 'SwapCached: 0 kB' 'Active: 7508592 kB' 'Inactive: 3506552 kB' 'Active(anon): 7114240 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517612 kB' 'Mapped: 189504 kB' 'Shmem: 6599936 kB' 'KReclaimable: 191140 kB' 'Slab: 556524 kB' 'SReclaimable: 191140 kB' 'SUnreclaim: 365384 kB' 'KernelStack: 12864 kB' 'PageTables: 7844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8228544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1764956 kB' 'DirectMap2M: 15980544 kB' 'DirectMap1G: 51380224 kB' 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.655 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... xtrace condensed: every key from MemFree through Unaccepted is compared against HugePages_Total and skipped with "continue" ...] 00:03:29.657 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.657 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:29.657 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:29.657 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:29.658 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:29.658 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
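
The loop traced above is get_meminfo from setup/common.sh: dump /proc/meminfo (or a single node's meminfo file), scan it key by key, and echo the value of the requested field. A minimal sketch reconstructed from the trace; the variable names follow the trace, but the body is a paraphrase, not the verbatim script:

#!/usr/bin/env bash
shopt -s extglob   # required by the +([0-9]) patterns seen in the trace

# get_meminfo FIELD [NODE] - print one meminfo value, system-wide by
# default or for a single NUMA node when NODE is given.
get_meminfo() {
	local get=$1 node=$2 var val line
	local mem_f=/proc/meminfo mem
	# Per-node queries read the node's own meminfo file instead.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# Node files prefix every line with "Node N "; strip that prefix.
	mem=("${mem[@]#Node +([0-9]) }")
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue   # skip until the key matches
		echo "$val"
		return 0
	done
}

# The two calls traced above:
resv=$(get_meminfo HugePages_Rsvd)     # prints 0 in this run
total=$(get_meminfo HugePages_Total)   # prints 1024 in this run

00:03:29.658 04:52:36 setup.sh.hugepages.no_shrink_alloc --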
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.658 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:29.658 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.658 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:29.658 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:29.658 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:29.658 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.658 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:29.658 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:29.658 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.658 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:29.658 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:29.658 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.658 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.658 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:29.658 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:29.658 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.658 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.658 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.658 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.658 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27094064 kB' 'MemUsed: 5735820 kB' 'SwapCached: 0 kB' 'Active: 2369536 kB' 'Inactive: 108696 kB' 'Active(anon): 2258648 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2251524 kB' 'Mapped: 27560 kB' 'AnonPages: 229856 kB' 'Shmem: 2031940 kB' 'KernelStack: 7512 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92336 kB' 'Slab: 314596 kB' 'SReclaimable: 92336 kB' 'SUnreclaim: 222260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:29.658 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.658 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.658 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.658 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.658 04:52:36 
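
get_nodes, traced above, enumerates the NUMA nodes and fills nodes_sys[] with each node's hugepage count (1024 on node0, 0 on node1, hence no_nodes=2). A sketch under one explicit assumption: the per-node count is read from the node's 2048 kB nr_hugepages counter, which is plausible for this script but not visible in the trace itself:

shopt -s extglob
declare -a nodes_sys
get_nodes() {
	local node
	for node in /sys/devices/system/node/node+([0-9]); do
		# Index by node number; the source of the count is an assumption.
		nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
	done
	no_nodes=${#nodes_sys[@]}   # 2 on this machine
	(( no_nodes > 0 ))
}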
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.658 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... xtrace condensed: every node0 meminfo key from MemFree through HugePages_Free is compared against HugePages_Surp and skipped with "continue" ...] 00:03:29.659 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.659 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.659 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:29.659 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:29.659 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:29.659 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:29.659 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:29.659 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:29.660 node0=1024 expecting 1024 00:03:29.660 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:29.660 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:29.660 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:29.660 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:29.660 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.660 04:52:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:31.041 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:31.041 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:31.041 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:31.041 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:31.041 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:31.041 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:31.041 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:31.041 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:31.041 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:31.041 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:31.041 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:31.041 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:31.041 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:31.041 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:31.041 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:31.041 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:31.041 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:31.041 INFO: Requested 512 hugepages but 1024 already allocated on node0
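
At this point the hugepage accounting is closed: 1024 pages total, 0 surplus, 0 reserved, all of them on node0, and setup.sh (run with NRHUGE=512 and CLEAR_HUGE=no) leaves the existing allocation alone. The checks reduce to arithmetic along these lines; nodes_test and nodes_sys are the arrays from the trace, while the setup.sh guard is an assumption inferred from the INFO line above:

# Global accounting: total pages must equal requested + surplus + reserved.
nr_hugepages=1024 surp=0 resv=0 NRHUGE=512
(( 1024 == nr_hugepages + surp + resv )) || echo "global hugepage count mismatch"

# Per-node accounting: measured counts vs. the system's own counters.
declare -a nodes_test=([0]=1024 [1]=0) nodes_sys=([0]=1024 [1]=0)
for node in "${!nodes_test[@]}"; do
	echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
done

# Assumed guard in setup.sh, inferred from the INFO line: skip reallocation
# when the node already holds at least the requested number of pages.
if (( nodes_sys[0] >= NRHUGE )); then
	echo "INFO: Requested $NRHUGE hugepages but ${nodes_sys[0]} already allocated on node0"
fi

00:03:31.041 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- #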
00:03:31.041 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:31.041 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:31.041 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:31.041 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:31.041 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:31.041 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:31.041 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:31.041 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:31.041 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:31.041 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:31.041 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:31.041 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:31.041 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:31.041 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:31.041 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:31.041 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:31.041 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:31.041 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:31.041 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:31.041 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:31.041 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45557296 kB' 'MemAvailable: 49060148 kB' 'Buffers: 2704 kB' 'Cached: 10498200 kB' 'SwapCached: 0 kB' 'Active: 7508864 kB' 'Inactive: 3506552 kB' 'Active(anon): 7114512 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517760 kB' 'Mapped: 189564 kB' 'Shmem: 6600000 kB' 'KReclaimable: 191140 kB' 'Slab: 556448 kB' 'SReclaimable: 191140 kB' 'SUnreclaim: 365308 kB' 'KernelStack: 12832 kB' 'PageTables: 7636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8228356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196224 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1764956 kB' 'DirectMap2M: 15980544 kB' 'DirectMap1G: 51380224 kB'
00:03:31.041 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:31.041 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:31.041 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:31.041 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... trace condensed: the same compare / continue / IFS=': ' / read -r var val _ sequence repeats (00:03:31.041-00:03:31.043) for every remaining /proc/meminfo key from MemFree through HardwareCorrupted, none of which matches AnonHugePages ...]
00:03:31.043 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:31.043 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:31.043 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:31.043 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
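[Editor's note] The compare/continue run condensed above is common.sh's get_meminfo helper walking /proc/meminfo one key at a time. A simplified, self-contained re-implementation of the same idea, inferred from the trace (an assumption, not the helper itself; the real script also strips the "Node N " prefix when reading a per-node meminfo file, which this sketch omits):

  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo var val _
      # with a node argument, read the per-node counters instead, as the
      # "[[ -e /sys/devices/system/node/node/meminfo ]]" test above suggests
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
          && mem_f=/sys/devices/system/node/node$node/meminfo
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # the repeated "continue" lines in the trace
          echo "${val:-0}"
          return 0
      done <"$mem_f"
      return 1
  }
  get_meminfo AnonHugePages   # prints 0 on this machine, hence anon=0 above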
00:03:31.043 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:31.043 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:31.043 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:31.043 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:31.043 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:31.043 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:31.043 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:31.043 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:31.043 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:31.043 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:31.043 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:31.043 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:31.043 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45556980 kB' 'MemAvailable: 49059832 kB' 'Buffers: 2704 kB' 'Cached: 10498200 kB' 'SwapCached: 0 kB' 'Active: 7509376 kB' 'Inactive: 3506552 kB' 'Active(anon): 7115024 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518256 kB' 'Mapped: 190008 kB' 'Shmem: 6600000 kB' 'KReclaimable: 191140 kB' 'Slab: 556452 kB' 'SReclaimable: 191140 kB' 'SUnreclaim: 365312 kB' 'KernelStack: 12784 kB' 'PageTables: 7496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8229748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196256 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1764956 kB' 'DirectMap2M: 15980544 kB' 'DirectMap1G: 51380224 kB'
00:03:31.043 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:31.043 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:31.043 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:31.043 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... trace condensed: the same compare / continue / IFS=': ' / read -r var val _ sequence repeats (00:03:31.043-00:03:31.044) for every remaining key from MemFree through HugePages_Rsvd; only HugePages_Surp matches ...]
00:03:31.044 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:31.044 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:31.044 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:31.044 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
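[Editor's note] An aside on these scans: each get_meminfo call walks all of /proc/meminfo for a single key, so this function's three calls (AnonHugePages above, HugePages_Surp just now, HugePages_Rsvd next) each produce one of these long traces. For comparison only, not as a suggested change to common.sh, a single pass can pull every hugepage counter at once:

  awk -F':[ \t]+' '/^HugePages_/ { print $1 "=" $2 }' /proc/meminfo
  # On this machine (per the snapshots above) this would print:
  #   HugePages_Total=1024  HugePages_Free=1024  HugePages_Rsvd=0  HugePages_Surp=0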
00:03:31.044 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:31.044 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:31.044 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:31.044 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:31.044 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:31.044 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:31.044 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:31.045 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:31.045 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:31.045 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:31.045 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:31.045 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:31.045 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45554944 kB' 'MemAvailable: 49057796 kB' 'Buffers: 2704 kB' 'Cached: 10498208 kB' 'SwapCached: 0 kB' 'Active: 7512288 kB' 'Inactive: 3506552 kB' 'Active(anon): 7117936 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521116 kB' 'Mapped: 190008 kB' 'Shmem: 6600008 kB' 'KReclaimable: 191140 kB' 'Slab: 556452 kB' 'SReclaimable: 191140 kB' 'SUnreclaim: 365312 kB' 'KernelStack: 12800 kB' 'PageTables: 7576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8233160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196240 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1764956 kB' 'DirectMap2M: 15980544 kB' 'DirectMap1G: 51380224 kB'
00:03:31.045 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:31.045 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:31.045 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:31.045 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... trace condensed: the same compare / continue / IFS=': ' / read -r var val _ sequence repeats (00:03:31.045-00:03:31.046) for each key from MemFree through CmaTotal, none of which matches HugePages_Rsvd ...]
00:03:31.046 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:31.046 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:31.046 04:52:37 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # IFS=': ' 00:03:31.046 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.046 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.046 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.046 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.046 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.046 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.046 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.046 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.046 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.046 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.046 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.046 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.046 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.046 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.046 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.046 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:31.046 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:31.046 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:31.046 nr_hugepages=1024 00:03:31.046 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:31.046 resv_hugepages=0 00:03:31.046 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:31.046 surplus_hugepages=0 00:03:31.046 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:31.046 anon_hugepages=0 00:03:31.046 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:31.046 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:31.046 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- 
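What this wall of xtrace corresponds to: SPDK's get_meminfo helper (setup/common.sh) answers "what is the value of one meminfo key" by reading the whole file and skipping lines until the key matches. The real helper slurps the file with mapfile and re-reads each element; the following is only a minimal standalone sketch of the same scan, assuming stock bash and /proc/meminfo:

  #!/usr/bin/env bash
  # Print the value of a single /proc/meminfo key, e.g. HugePages_Rsvd.
  get_meminfo() {
      local get=$1 var val _
      # IFS=': ' splits "HugePages_Rsvd:       0" into var=HugePages_Rsvd, val=0.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # the 'continue' lines in the trace
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }

  get_meminfo HugePages_Rsvd   # prints 0 on the host traced above

Scanning past every key is why a single lookup emits dozens of IFS/read/continue triples in the trace.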
setup/common.sh@28 -- # mapfile -t mem 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45551932 kB' 'MemAvailable: 49054784 kB' 'Buffers: 2704 kB' 'Cached: 10498252 kB' 'SwapCached: 0 kB' 'Active: 7514456 kB' 'Inactive: 3506552 kB' 'Active(anon): 7120104 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523288 kB' 'Mapped: 190384 kB' 'Shmem: 6600052 kB' 'KReclaimable: 191140 kB' 'Slab: 556448 kB' 'SReclaimable: 191140 kB' 'SUnreclaim: 365308 kB' 'KernelStack: 12848 kB' 'PageTables: 7720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8234908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196244 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1764956 kB' 'DirectMap2M: 15980544 kB' 'DirectMap1G: 51380224 kB' 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.047 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.047 
[... same per-key scan: every key from Cached through CmaFree read and skipped while searching for HugePages_Total ...]
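One detail from the mapfile dump above: the same helper also serves per-NUMA-node lookups, and node meminfo files prefix every line with "Node <n>", so the helper strips that prefix with an extglob substitution (a no-op for /proc/meminfo). A sketch of just that step, with node0 assumed present:

  #!/usr/bin/env bash
  shopt -s extglob   # enables the +([0-9]) pattern below

  # Per-node files read "Node 0 MemTotal: ..." while /proc/meminfo
  # reads "MemTotal: ..."; strip the prefix so one parser handles both.
  mapfile -t mem < /sys/devices/system/node/node0/meminfo
  mem=("${mem[@]#Node +([0-9]) }")   # same expansion as setup/common.sh@29
  printf '%s\n' "${mem[@]:0:3}"      # show the first few normalized lines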
[... scan skips Unaccepted, then the requested key matches ...]
00:03:31.048 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:31.048 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:31.048 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:31.048 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:31.048 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:31.048 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:31.048 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:31.048 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27079956 kB' 'MemUsed: 5749928 kB' 'SwapCached: 0 kB' 'Active: 2370920 kB' 'Inactive: 108696 kB' 'Active(anon): 2260032 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2251524 kB' 'Mapped: 28284 kB' 'AnonPages: 231224 kB' 'Shmem: 2031940 kB' 'KernelStack: 7528 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92336 kB' 'Slab: 314528 kB' 'SReclaimable: 92336 kB' 'SUnreclaim: 222192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.308 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.309 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.309 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.309 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.309 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.309 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.309 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.309 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.309 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.309 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.309 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.309 04:52:37 setup.sh.hugepages.no_shrink_alloc 
[... per-key scan over the node0 meminfo dump above: Active(anon) through FileHugePages read and skipped while searching for HugePages_Surp ...]
[... scan skips FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free ...]
00:03:31.310 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:31.310 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:31.310 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:31.310 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:31.310 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:31.310 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:31.310 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:31.310 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:31.310 node0=1024 expecting 1024
00:03:31.310 04:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:31.310
00:03:31.310 real 0m2.850s
00:03:31.310 user 0m1.178s
00:03:31.310 sys 0m1.596s
00:03:31.310 04:52:37 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:31.310 04:52:37 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:31.310 ************************************
00:03:31.310 END TEST no_shrink_alloc
00:03:31.310 ************************************
00:03:31.310 04:52:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
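The closing checks encode the accounting this test exists to verify: the kernel-reported hugepage total must cover the requested pages plus any surplus and reserved pages, and node0 must still hold all 1024 pages after the allocation exercise (hence "no shrink"). A hedged restatement of those two checks; the hugepages-2048kB pool path is the standard sysfs layout, the rest mirrors the trace:

  #!/usr/bin/env bash
  req=1024   # pages the test configured via nr_hugepages

  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

  # hugepages.sh@107 equivalent: total accounts for requested + surplus + reserved.
  (( total == req + surp + resv )) || echo "hugepage accounting mismatch"

  # Per-node view behind the 'node0=1024 expecting 1024' line above.
  node0=$(cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages)
  echo "node0=$node0 expecting $req"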
00:03:31.310 04:52:37 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:03:31.310 04:52:37 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:31.310 04:52:37 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:31.310 04:52:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:31.310 04:52:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:31.310 04:52:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:31.310 04:52:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:31.310 04:52:37 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:31.310 04:52:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:31.310 04:52:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:31.310 04:52:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:31.310 04:52:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:31.310 04:52:37 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:31.310 04:52:37 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:31.310
00:03:31.310 real 0m11.280s
00:03:31.310 user 0m4.338s
00:03:31.310 sys 0m5.845s
00:03:31.310 04:52:37 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:31.310 04:52:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:31.310 ************************************
00:03:31.310 END TEST hugepages
00:03:31.310 ************************************
00:03:31.310 04:52:37 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:03:31.310 04:52:37 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:03:31.310 04:52:37 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:31.310 04:52:37 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:31.310 04:52:37 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:31.310 ************************************
00:03:31.310 START TEST driver
00:03:31.310 ************************************
00:03:31.310 04:52:37 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:03:31.310 * Looking for test storage...
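clear_hp, traced just above, resets every per-node hugepage pool by writing 0 to each pool's nr_hugepages file, which is what the repeated for/echo lines compress to. A standalone equivalent sketch (needs root to write the sysfs files; CLEAR_HUGE is exported for later setup.sh steps, as in the trace):

  #!/usr/bin/env bash
  shopt -s extglob   # for the node+([0-9]) glob used by hugepages.sh@39

  for node in /sys/devices/system/node/node+([0-9]); do
      # One pool per supported size, e.g. hugepages-2048kB, hugepages-1048576kB.
      for hp in "$node"/hugepages/hugepages-*; do
          echo 0 > "$hp/nr_hugepages"   # release this pool's pages
      done
  done
  export CLEAR_HUGE=yes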
00:03:31.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:31.310 04:52:37 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:31.310 04:52:37 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:31.310 04:52:37 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:33.856 04:52:40 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:33.856 04:52:40 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:33.856 04:52:40 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.856 04:52:40 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:33.856 ************************************ 00:03:33.856 START TEST guess_driver 00:03:33.856 ************************************ 00:03:33.856 04:52:40 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:33.856 04:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:33.856 04:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:33.856 04:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:33.856 04:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:33.856 04:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:33.856 04:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:33.856 04:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:33.856 04:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:33.856 04:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:33.856 04:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:03:33.856 04:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:33.856 04:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:33.856 04:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:33.856 04:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:33.856 04:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:33.856 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:33.856 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:33.856 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:33.856 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:33.856 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:33.856 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:33.856 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:33.856 04:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:33.856 04:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:33.856 04:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:33.856 04:52:40 setup.sh.driver.guess_driver 
00:03:33.856 04:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:03:33.856 04:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:03:33.856 Looking for driver=vfio-pci
00:03:33.856 04:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:33.856 04:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:03:33.856 04:52:40 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:03:33.856 04:52:40 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:35.230 04:52:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:35.230 04:52:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:35.230 04:52:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[... the identical driver.sh@58/@61/@57 marker-check triplet repeats for each remaining device line read from 'setup output config'; every device resolves to vfio-pci ...]
00:03:36.166 04:52:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:36.166 04:52:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:36.166 04:52:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:36.166 04:52:42 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:03:36.166 04:52:42 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:03:36.166 04:52:42 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:36.166 04:52:42 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:38.691
00:03:38.691 real 0m4.811s
00:03:38.691 user 0m1.107s
00:03:38.691 sys 0m1.807s
00:03:38.691 04:52:45 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:38.691 04:52:45 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:03:38.691 ************************************
00:03:38.691 END TEST guess_driver
00:03:38.691 ************************************
00:03:38.691 04:52:45 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0
00:03:38.691
00:03:38.691 real 0m7.419s
00:03:38.691 user 0m1.750s
00:03:38.691 sys 0m2.810s
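
Aside: "setup reset" is the inverse of the vfio-pci binding that guess_driver just validated: it hands every test device back to its default kernel driver, so the devices.sh stage below can see the disk as plain /dev/nvme0n1 again. Per device, the sysfs mechanics look roughly like this (an illustrative sketch of the standard PCI rebind dance, not the scripts' literal code):

    bdf=0000:88:00.0                                    # the NVMe disk used throughout this run
    echo "$bdf" > /sys/bus/pci/drivers/vfio-pci/unbind  # detach from vfio-pci
    echo > /sys/bus/pci/devices/$bdf/driver_override    # clear any driver pinning
    echo "$bdf" > /sys/bus/pci/drivers_probe            # let the kernel re-match a driver (nvme)

00:03:38.691 04:52:45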
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:38.691 04:52:45 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:38.691 ************************************ 00:03:38.691 END TEST driver 00:03:38.691 ************************************ 00:03:38.691 04:52:45 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:38.691 04:52:45 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:38.691 04:52:45 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:38.691 04:52:45 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:38.691 04:52:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:38.691 ************************************ 00:03:38.691 START TEST devices 00:03:38.691 ************************************ 00:03:38.691 04:52:45 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:38.691 * Looking for test storage... 00:03:38.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:38.691 04:52:45 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:38.691 04:52:45 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:38.691 04:52:45 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:38.691 04:52:45 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:40.076 04:52:46 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:40.076 04:52:46 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:40.076 04:52:46 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:40.076 04:52:46 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:40.076 04:52:46 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:40.076 04:52:46 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:40.076 04:52:46 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:40.076 04:52:46 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:40.076 04:52:46 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:40.076 04:52:46 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:40.076 04:52:46 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:40.076 04:52:46 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:40.076 04:52:46 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:40.076 04:52:46 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:40.076 04:52:46 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:40.076 04:52:46 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:40.076 04:52:46 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:40.076 04:52:46 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:03:40.076 04:52:46 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:40.076 04:52:46 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:40.076 04:52:46 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:40.076 
04:52:46 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:03:40.076 No valid GPT data, bailing
00:03:40.076 04:52:46 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:40.076 04:52:46 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:03:40.076 04:52:46 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:03:40.076 04:52:46 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:03:40.076 04:52:46 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:03:40.076 04:52:46 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:03:40.076 04:52:46 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016
00:03:40.076 04:52:46 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size ))
00:03:40.076 04:52:46 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:03:40.076 04:52:46 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0
00:03:40.076 04:52:46 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:03:40.076 04:52:46 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:03:40.076 04:52:46 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:03:40.076 04:52:46 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:40.076 04:52:46 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:40.076 04:52:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:40.076 ************************************
00:03:40.076 START TEST nvme_mount
00:03:40.076 ************************************
00:03:40.076 04:52:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount
00:03:40.076 04:52:46 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:03:40.076 04:52:46 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:03:40.076 04:52:46 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:40.076 04:52:46 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:40.077 04:52:46 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:03:40.077 04:52:46 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:03:40.077 04:52:46 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1
00:03:40.077 04:52:46 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824
00:03:40.077 04:52:46 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:03:40.077 04:52:46 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=()
00:03:40.077 04:52:46 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts
00:03:40.077 04:52:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:03:40.077 04:52:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:40.077 04:52:46 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:40.077 04:52:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ ))
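
Aside: the size gate above comes straight from sysfs. /sys/block/<dev>/size reports the device length in 512-byte sectors, so a minimal equivalent of sec_size_to_bytes (the multiply-by-512 step is our assumption; only its 1000204886016-byte result is visible in this excerpt) is:

    # bytes = sectors * 512; compare against the 3 GiB floor set at devices.sh@198
    dev=nvme0n1
    min_disk_size=3221225472   # 3 * 1024^3
    bytes=$(( $(cat /sys/block/$dev/size) * 512 ))
    (( bytes >= min_disk_size )) && echo "$dev is big enough: $bytes bytes"

For this disk: 1953525168 sectors * 512 = 1000204886016 bytes (a nominal 1 TB drive), comfortably above the floor.
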
00:03:40.077 04:52:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:40.077 04:52:46 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:40.077 04:52:46 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:40.077 04:52:46 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:03:41.455 Creating new GPT entries in memory.
00:03:41.455 GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:41.455 other utilities.
00:03:41.455 04:52:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:03:41.455 04:52:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:41.455 04:52:47 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:41.455 04:52:47 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:41.455 04:52:47 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:42.390 Creating new GPT entries in memory.
00:03:42.390 The operation has completed successfully.
00:03:42.390 04:52:48 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:42.390 04:52:48 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:42.390 04:52:48 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 540839
00:03:42.390 04:52:48 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:42.390 04:52:48 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=
00:03:42.390 04:52:48 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:42.390 04:52:48 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:03:42.390 04:52:48 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:03:42.390 04:52:48 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:42.390 04:52:48 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:42.390 04:52:48 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:03:42.390 04:52:48 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:03:42.390 04:52:48 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:42.390 04:52:48 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:42.390 04:52:48 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:42.390 04:52:48 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
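
Aside: the sector arithmetic above checks out. size=1073741824 bytes becomes 1073741824/512 = 2097152 sectors, the first partition starts at sector 2048, and 2048 + 2097152 - 1 = 2099199, exactly the --new=1:2048:2099199 the log shows. The whole prepare-and-mount step reduces to this (a sketch, not the scripts' literal code; /tmp/nvme_mount stands in for the workspace mount point):

    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all                             # drop old GPT/MBR structures
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199    # one 1 GiB partition
    mkfs.ext4 -qF "${disk}p1"                            # quiet, force
    mkdir -p /tmp/nvme_mount && mount "${disk}p1" /tmp/nvme_mount

The flock around sgdisk matters on a shared CI box: it serializes partition-table writers so a concurrent scan never sees a half-written table.

00:03:42.390 04:52:48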
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:42.390 04:52:48 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:42.390 04:52:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.390 04:52:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:42.390 04:52:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:42.390 04:52:48 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.390 04:52:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.325 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.584 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:43.584 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:43.584 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.584 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:43.584 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.584 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:43.584 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.584 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.584 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:43.584 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:43.584 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:43.584 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:43.584 04:52:49 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:43.842 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:43.842 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:43.842 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:43.842 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:43.842 04:52:50 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:43.842 04:52:50 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:43.842 04:52:50 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.842 04:52:50 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:43.842 04:52:50 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:43.842 04:52:50 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.842 04:52:50 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.842 04:52:50 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:43.842 04:52:50 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:43.842 04:52:50 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.842 04:52:50 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.842 04:52:50 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:43.842 04:52:50 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:43.842 04:52:50 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:43.842 04:52:50 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:43.842 04:52:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.842 04:52:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:43.842 04:52:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:43.842 04:52:50 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.842 04:52:50 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.218 04:52:51 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 
00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.152 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.153 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.153 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.153 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.412 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:46.412 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:46.412 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:46.412 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:46.412 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.412 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:46.412 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:46.412 04:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:46.412 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:46.412 00:03:46.412 real 0m6.173s 00:03:46.412 user 0m1.473s 00:03:46.412 sys 0m2.247s 00:03:46.412 04:52:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.412 04:52:52 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:03:46.412 ************************************ 00:03:46.412 END TEST nvme_mount 00:03:46.412 ************************************ 00:03:46.412 04:52:52 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:46.412 04:52:52 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:46.412 04:52:52 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.412 04:52:52 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.412 04:52:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:46.412 ************************************ 00:03:46.412 START TEST dm_mount 00:03:46.412 ************************************ 00:03:46.412 04:52:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:46.412 04:52:52 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:46.412 04:52:52 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:46.412 04:52:52 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:46.412 04:52:52 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:46.412 04:52:52 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:46.412 04:52:52 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:46.412 04:52:52 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:46.412 04:52:52 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:46.412 04:52:52 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:46.412 04:52:52 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:46.412 04:52:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:46.412 04:52:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:46.412 04:52:52 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:46.412 04:52:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:46.412 04:52:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:46.412 04:52:52 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:46.412 04:52:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:46.412 04:52:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:46.412 04:52:52 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:46.412 04:52:52 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:46.412 04:52:52 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:47.346 Creating new GPT entries in memory. 00:03:47.346 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:47.346 other utilities. 00:03:47.346 04:52:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:47.346 04:52:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:47.346 04:52:53 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 ))
00:03:47.346 04:52:53 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:47.346 04:52:53 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:48.723 Creating new GPT entries in memory.
00:03:48.723 The operation has completed successfully.
00:03:48.723 04:52:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:48.723 04:52:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:48.723 04:52:54 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:48.723 04:52:54 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:48.723 04:52:54 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:03:49.659 The operation has completed successfully.
00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 543226
00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5}
00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break
00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0
00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0
00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size=
00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
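
Aside: dmsetup create reads its device-mapper table from stdin, which the xtrace above does not capture. Given the two 1 GiB partitions just created (sectors 2048-2099199 and 2099200-4196351), a table that produces the same picture, one /dev/dm-0 holding both partitions, which is exactly what the holders checks confirm via /sys/class/block/nvme0n1p*/holders/dm-0, is a linear concatenation (our assumption; the test's literal table is not shown here):

    # table rows: <logical start> <length> linear <backing dev> <offset>, all in 512-byte sectors
    s1=$(blockdev --getsz /dev/nvme0n1p1)   # 2097152
    s2=$(blockdev --getsz /dev/nvme0n1p2)   # 2097152
    printf '%s\n' "0 $s1 linear /dev/nvme0n1p1 0" \
                  "$s1 $s2 linear /dev/nvme0n1p2 0" | dmsetup create nvme_dm_test
    readlink -f /dev/mapper/nvme_dm_test    # resolves to the kernel node, e.g. /dev/dm-0

Once the mapping exists, each backing partition gains a holders/ entry pointing at dm-0, which is what ties the later "Active devices: holder@nvme0n1p1:dm-0,..." lines back to this device.

00:03:49.659 04:52:55 setup.sh.devices.dm_mount --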
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.659 04:52:55 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.593 04:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.853 04:52:57 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:50.853 04:52:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:50.853 04:52:57 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:50.853 04:52:57 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:50.853 04:52:57 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:50.853 04:52:57 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:50.853 04:52:57 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:50.853 04:52:57 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:50.853 04:52:57 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:50.853 04:52:57 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:50.853 04:52:57 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:50.853 04:52:57 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:50.853 04:52:57 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:50.853 04:52:57 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:50.853 04:52:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.853 04:52:57 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:50.853 04:52:57 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:50.853 04:52:57 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.853 04:52:57 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.789 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.049 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:52.049 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:52.049 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:52.049 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:52.049 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:52.049 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:52.049 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:52.049 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:52.049 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:52.049 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:52.049 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:52.049 04:52:58 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:52.049 00:03:52.049 real 0m5.678s 00:03:52.049 user 0m0.950s 00:03:52.049 sys 0m1.586s 00:03:52.049 04:52:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.049 04:52:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:52.049 ************************************ 00:03:52.049 END TEST dm_mount 00:03:52.049 ************************************ 00:03:52.049 04:52:58 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:03:52.049 04:52:58 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:52.049 04:52:58 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:52.049 04:52:58 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.049 04:52:58 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:52.049 04:52:58 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:52.049 04:52:58 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:52.049 04:52:58 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:52.308 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:52.308 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:52.308 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:52.308 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:52.308 04:52:58 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:52.308 04:52:58 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:52.308 04:52:58 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:52.308 04:52:58 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:52.308 04:52:58 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:52.308 04:52:58 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:52.308 04:52:58 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:52.308 00:03:52.308 real 0m13.671s 00:03:52.308 user 0m3.051s 00:03:52.308 sys 0m4.784s 00:03:52.308 04:52:58 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.308 04:52:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:52.308 ************************************ 00:03:52.308 END TEST devices 00:03:52.308 ************************************ 00:03:52.308 04:52:58 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:52.308 00:03:52.308 real 0m42.895s 00:03:52.308 user 0m12.425s 00:03:52.308 sys 0m18.686s 00:03:52.308 04:52:58 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.308 04:52:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:52.308 ************************************ 00:03:52.308 END TEST setup.sh 00:03:52.308 ************************************ 00:03:52.566 04:52:58 -- common/autotest_common.sh@1142 -- # return 0 00:03:52.566 04:52:58 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:53.500 Hugepages 00:03:53.500 node hugesize free / total 00:03:53.501 node0 1048576kB 0 / 0 00:03:53.501 node0 2048kB 2048 / 2048 00:03:53.501 node1 1048576kB 0 / 0 00:03:53.501 node1 2048kB 0 / 0 00:03:53.501 00:03:53.501 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:53.501 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:53.501 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:53.501 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:53.501 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:53.501 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:53.501 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:53.501 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:53.501 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:53.501 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:53.501 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:53.501 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:53.501 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:53.501 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:53.501 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:53.501 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:53.501 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:53.758 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:53.758 04:53:00 -- spdk/autotest.sh@130 -- # uname -s 00:03:53.758 04:53:00 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:53.758 04:53:00 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:53.758 04:53:00 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:54.722 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:54.722 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:54.722 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:54.722 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:54.722 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:54.722 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:54.722 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:54.722 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:54.722 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:54.722 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:54.722 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:54.980 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:54.980 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:54.980 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:54.980 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:54.980 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:55.915 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:55.915 04:53:02 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:56.852 04:53:03 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:56.852 04:53:03 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:56.852 04:53:03 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:56.852 04:53:03 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:56.852 04:53:03 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:56.852 04:53:03 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:56.852 04:53:03 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:56.852 04:53:03 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:56.852 04:53:03 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:56.852 04:53:03 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:56.852 04:53:03 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:56.852 04:53:03 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:58.225 Waiting for block devices as requested 00:03:58.225 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:58.225 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:58.225 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:58.484 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:58.484 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:58.484 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:58.484 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:58.742 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:58.742 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:03:58.742 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:58.742 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:58.999 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:58.999 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:58.999 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:58.999 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:59.255 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:59.255 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:59.513 04:53:05 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:59.513 04:53:05 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:59.513 04:53:05 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:03:59.513 04:53:05 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:03:59.513 04:53:05 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:59.513 04:53:05 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:59.513 04:53:05 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:59.513 04:53:05 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:59.513 04:53:05 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:59.513 04:53:05 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:59.513 04:53:05 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:59.513 04:53:05 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:59.513 04:53:05 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:59.513 04:53:05 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:03:59.513 04:53:05 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:59.513 04:53:05 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:59.513 04:53:05 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:59.513 04:53:05 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:59.513 04:53:05 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:59.513 04:53:05 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:59.513 04:53:05 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:59.513 04:53:05 -- common/autotest_common.sh@1557 -- # continue 00:03:59.513 04:53:05 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:59.513 04:53:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:59.513 04:53:05 -- common/autotest_common.sh@10 -- # set +x 00:03:59.513 04:53:05 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:59.513 04:53:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:59.513 04:53:05 -- common/autotest_common.sh@10 -- # set +x 00:03:59.513 04:53:05 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:00.446 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:00.446 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:00.446 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:00.446 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:00.446 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:00.446 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:00.446 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:00.446 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:00.446 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:00.703 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
00:04:00.703 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:00.703 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:00.703 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:00.703 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:00.703 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:00.703 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:01.636 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:01.636 04:53:08 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:01.636 04:53:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:01.636 04:53:08 -- common/autotest_common.sh@10 -- # set +x 00:04:01.636 04:53:08 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:01.636 04:53:08 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:01.636 04:53:08 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:01.636 04:53:08 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:01.636 04:53:08 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:01.636 04:53:08 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:01.636 04:53:08 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:01.636 04:53:08 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:01.636 04:53:08 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:01.636 04:53:08 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:01.636 04:53:08 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:01.636 04:53:08 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:01.636 04:53:08 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:01.636 04:53:08 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:01.636 04:53:08 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:01.636 04:53:08 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:01.636 04:53:08 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:01.636 04:53:08 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:01.636 04:53:08 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:04:01.636 04:53:08 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:04:01.636 04:53:08 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=548396 00:04:01.636 04:53:08 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.636 04:53:08 -- common/autotest_common.sh@1598 -- # waitforlisten 548396 00:04:01.636 04:53:08 -- common/autotest_common.sh@829 -- # '[' -z 548396 ']' 00:04:01.636 04:53:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.636 04:53:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:01.636 04:53:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:01.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:01.636 04:53:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:01.636 04:53:08 -- common/autotest_common.sh@10 -- # set +x 00:04:01.894 [2024-07-13 04:53:08.220969] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:04:01.894 [2024-07-13 04:53:08.221120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid548396 ] 00:04:01.894 EAL: No free 2048 kB hugepages reported on node 1 00:04:01.894 [2024-07-13 04:53:08.352781] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.152 [2024-07-13 04:53:08.605899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.086 04:53:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:03.086 04:53:09 -- common/autotest_common.sh@862 -- # return 0 00:04:03.086 04:53:09 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:03.086 04:53:09 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:03.086 04:53:09 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:06.370 nvme0n1 00:04:06.370 04:53:12 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:06.370 [2024-07-13 04:53:12.859610] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:06.370 [2024-07-13 04:53:12.859695] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:06.370 request: 00:04:06.370 { 00:04:06.370 "nvme_ctrlr_name": "nvme0", 00:04:06.370 "password": "test", 00:04:06.370 "method": "bdev_nvme_opal_revert", 00:04:06.370 "req_id": 1 00:04:06.370 } 00:04:06.370 Got JSON-RPC error response 00:04:06.370 response: 00:04:06.370 { 00:04:06.370 "code": -32603, 00:04:06.370 "message": "Internal error" 00:04:06.370 } 00:04:06.628 04:53:12 -- common/autotest_common.sh@1604 -- # true 00:04:06.628 04:53:12 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:06.628 04:53:12 -- common/autotest_common.sh@1608 -- # killprocess 548396 00:04:06.628 04:53:12 -- common/autotest_common.sh@948 -- # '[' -z 548396 ']' 00:04:06.628 04:53:12 -- common/autotest_common.sh@952 -- # kill -0 548396 00:04:06.628 04:53:12 -- common/autotest_common.sh@953 -- # uname 00:04:06.628 04:53:12 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:06.628 04:53:12 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 548396 00:04:06.628 04:53:12 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:06.628 04:53:12 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:06.628 04:53:12 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 548396' 00:04:06.628 killing process with pid 548396 00:04:06.628 04:53:12 -- common/autotest_common.sh@967 -- # kill 548396 00:04:06.628 04:53:12 -- common/autotest_common.sh@972 -- # wait 548396 00:04:10.824 04:53:16 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:10.824 04:53:16 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:10.824 04:53:16 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:10.824 04:53:16 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:10.824 04:53:16 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:10.824 04:53:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:10.824 04:53:16 -- common/autotest_common.sh@10 -- # set +x 00:04:10.824 04:53:16 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:10.824 04:53:16 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:10.824 04:53:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.824 04:53:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.824 04:53:16 -- common/autotest_common.sh@10 -- # set +x 00:04:10.824 ************************************ 00:04:10.824 START TEST env 00:04:10.824 ************************************ 00:04:10.824 04:53:16 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:10.824 * Looking for test storage... 00:04:10.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:10.824 04:53:16 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:10.824 04:53:16 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.824 04:53:16 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.824 04:53:16 env -- common/autotest_common.sh@10 -- # set +x 00:04:10.824 ************************************ 00:04:10.824 START TEST env_memory 00:04:10.824 ************************************ 00:04:10.824 04:53:16 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:10.824 00:04:10.824 00:04:10.824 CUnit - A unit testing framework for C - Version 2.1-3 00:04:10.824 http://cunit.sourceforge.net/ 00:04:10.824 00:04:10.824 00:04:10.824 Suite: memory 00:04:10.824 Test: alloc and free memory map ...[2024-07-13 04:53:16.805099] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:10.824 passed 00:04:10.824 Test: mem map translation ...[2024-07-13 04:53:16.846279] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:10.824 [2024-07-13 04:53:16.846320] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:10.824 [2024-07-13 04:53:16.846396] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:10.824 [2024-07-13 04:53:16.846426] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:10.824 passed 00:04:10.824 Test: mem map registration ...[2024-07-13 04:53:16.918593] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:10.824 [2024-07-13 04:53:16.918637] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:10.824 passed 00:04:10.824 Test: mem map adjacent registrations ...passed 00:04:10.824 00:04:10.824 Run Summary: Type Total Ran Passed Failed Inactive 00:04:10.824 suites 1 1 n/a 0 0 00:04:10.824 tests 4 4 4 0 0 00:04:10.824 asserts 152 152 152 0 n/a 00:04:10.824 00:04:10.824 Elapsed time = 0.246 seconds 00:04:10.824 00:04:10.824 real 0m0.267s 00:04:10.824 user 0m0.252s 00:04:10.824 sys 0m0.014s 00:04:10.824 04:53:17 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.824 04:53:17 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:10.824 ************************************ 00:04:10.824 END TEST env_memory 00:04:10.824 ************************************ 00:04:10.824 04:53:17 env -- common/autotest_common.sh@1142 -- # return 0 00:04:10.824 04:53:17 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:10.824 04:53:17 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.824 04:53:17 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.824 04:53:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:10.824 ************************************ 00:04:10.824 START TEST env_vtophys 00:04:10.824 ************************************ 00:04:10.824 04:53:17 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:10.824 EAL: lib.eal log level changed from notice to debug 00:04:10.824 EAL: Detected lcore 0 as core 0 on socket 0 00:04:10.824 EAL: Detected lcore 1 as core 1 on socket 0 00:04:10.824 EAL: Detected lcore 2 as core 2 on socket 0 00:04:10.824 EAL: Detected lcore 3 as core 3 on socket 0 00:04:10.824 EAL: Detected lcore 4 as core 4 on socket 0 00:04:10.824 EAL: Detected lcore 5 as core 5 on socket 0 00:04:10.824 EAL: Detected lcore 6 as core 8 on socket 0 00:04:10.824 EAL: Detected lcore 7 as core 9 on socket 0 00:04:10.824 EAL: Detected lcore 8 as core 10 on socket 0 00:04:10.824 EAL: Detected lcore 9 as core 11 on socket 0 00:04:10.824 EAL: Detected lcore 10 as core 12 on socket 0 00:04:10.824 EAL: Detected lcore 11 as core 13 on socket 0 00:04:10.824 EAL: Detected lcore 12 as core 0 on socket 1 00:04:10.824 EAL: Detected lcore 13 as core 1 on socket 1 00:04:10.824 EAL: Detected lcore 14 as core 2 on socket 1 00:04:10.824 EAL: Detected lcore 15 as core 3 on socket 1 00:04:10.824 EAL: Detected lcore 16 as core 4 on socket 1 00:04:10.824 EAL: Detected lcore 17 as core 5 on socket 1 00:04:10.824 EAL: Detected lcore 18 as core 8 on socket 1 00:04:10.824 EAL: Detected lcore 19 as core 9 on socket 1 00:04:10.824 EAL: Detected lcore 20 as core 10 on socket 1 00:04:10.824 EAL: Detected lcore 21 as core 11 on socket 1 00:04:10.824 EAL: Detected lcore 22 as core 12 on socket 1 00:04:10.824 EAL: Detected lcore 23 as core 13 on socket 1 00:04:10.824 EAL: Detected lcore 24 as core 0 on socket 0 00:04:10.824 EAL: Detected lcore 25 as core 1 on socket 0 00:04:10.824 EAL: Detected lcore 26 as core 2 on socket 0 00:04:10.824 EAL: Detected lcore 27 as core 3 on socket 0 00:04:10.824 EAL: Detected lcore 28 as core 4 on socket 0 00:04:10.824 EAL: Detected lcore 29 as core 5 on socket 0 00:04:10.824 EAL: Detected lcore 30 as core 8 on socket 0 00:04:10.824 EAL: Detected lcore 31 as core 9 on socket 0 00:04:10.824 EAL: Detected lcore 32 as core 10 on socket 0 00:04:10.825 EAL: Detected lcore 33 as core 11 on socket 0 00:04:10.825 EAL: Detected lcore 34 as core 12 on socket 0 00:04:10.825 EAL: Detected lcore 35 as core 13 on socket 0 00:04:10.825 EAL: Detected lcore 36 as core 0 on socket 1 00:04:10.825 EAL: Detected lcore 37 as core 1 on socket 1 00:04:10.825 EAL: Detected lcore 38 as core 2 on socket 1 00:04:10.825 EAL: Detected lcore 39 as core 3 on socket 1 00:04:10.825 EAL: Detected lcore 40 as core 4 on socket 1 00:04:10.825 EAL: Detected lcore 41 as core 5 on socket 1 00:04:10.825 EAL: Detected 
lcore 42 as core 8 on socket 1 00:04:10.825 EAL: Detected lcore 43 as core 9 on socket 1 00:04:10.825 EAL: Detected lcore 44 as core 10 on socket 1 00:04:10.825 EAL: Detected lcore 45 as core 11 on socket 1 00:04:10.825 EAL: Detected lcore 46 as core 12 on socket 1 00:04:10.825 EAL: Detected lcore 47 as core 13 on socket 1 00:04:10.825 EAL: Maximum logical cores by configuration: 128 00:04:10.825 EAL: Detected CPU lcores: 48 00:04:10.825 EAL: Detected NUMA nodes: 2 00:04:10.825 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:10.825 EAL: Detected shared linkage of DPDK 00:04:10.825 EAL: No shared files mode enabled, IPC will be disabled 00:04:10.825 EAL: Bus pci wants IOVA as 'DC' 00:04:10.825 EAL: Buses did not request a specific IOVA mode. 00:04:10.825 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:10.825 EAL: Selected IOVA mode 'VA' 00:04:10.825 EAL: No free 2048 kB hugepages reported on node 1 00:04:10.825 EAL: Probing VFIO support... 00:04:10.825 EAL: IOMMU type 1 (Type 1) is supported 00:04:10.825 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:10.825 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:10.825 EAL: VFIO support initialized 00:04:10.825 EAL: Ask a virtual area of 0x2e000 bytes 00:04:10.825 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:10.825 EAL: Setting up physically contiguous memory... 00:04:10.825 EAL: Setting maximum number of open files to 524288 00:04:10.825 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:10.825 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:10.825 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:10.825 EAL: Ask a virtual area of 0x61000 bytes 00:04:10.825 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:10.825 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:10.825 EAL: Ask a virtual area of 0x400000000 bytes 00:04:10.825 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:10.825 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:10.825 EAL: Ask a virtual area of 0x61000 bytes 00:04:10.825 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:10.825 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:10.825 EAL: Ask a virtual area of 0x400000000 bytes 00:04:10.825 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:10.825 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:10.825 EAL: Ask a virtual area of 0x61000 bytes 00:04:10.825 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:10.825 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:10.825 EAL: Ask a virtual area of 0x400000000 bytes 00:04:10.825 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:10.825 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:10.825 EAL: Ask a virtual area of 0x61000 bytes 00:04:10.825 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:10.825 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:10.825 EAL: Ask a virtual area of 0x400000000 bytes 00:04:10.825 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:10.825 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:10.825 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:10.825 EAL: Ask a virtual area of 0x61000 bytes 00:04:10.825 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:10.825 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:04:10.825 EAL: Ask a virtual area of 0x400000000 bytes 00:04:10.825 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:10.825 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:10.825 EAL: Ask a virtual area of 0x61000 bytes 00:04:10.825 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:10.825 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:10.825 EAL: Ask a virtual area of 0x400000000 bytes 00:04:10.825 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:10.825 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:10.825 EAL: Ask a virtual area of 0x61000 bytes 00:04:10.825 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:10.825 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:10.825 EAL: Ask a virtual area of 0x400000000 bytes 00:04:10.825 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:10.825 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:10.825 EAL: Ask a virtual area of 0x61000 bytes 00:04:10.825 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:10.825 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:10.825 EAL: Ask a virtual area of 0x400000000 bytes 00:04:10.825 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:10.825 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:10.825 EAL: Hugepages will be freed exactly as allocated. 00:04:10.825 EAL: No shared files mode enabled, IPC is disabled 00:04:10.825 EAL: No shared files mode enabled, IPC is disabled 00:04:10.825 EAL: TSC frequency is ~2700000 KHz 00:04:10.825 EAL: Main lcore 0 is ready (tid=7efd86f3ea40;cpuset=[0]) 00:04:10.825 EAL: Trying to obtain current memory policy. 00:04:10.825 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.825 EAL: Restoring previous memory policy: 0 00:04:10.825 EAL: request: mp_malloc_sync 00:04:10.825 EAL: No shared files mode enabled, IPC is disabled 00:04:10.825 EAL: Heap on socket 0 was expanded by 2MB 00:04:10.825 EAL: No shared files mode enabled, IPC is disabled 00:04:10.825 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:10.825 EAL: Mem event callback 'spdk:(nil)' registered 00:04:10.825 00:04:10.825 00:04:10.825 CUnit - A unit testing framework for C - Version 2.1-3 00:04:10.825 http://cunit.sourceforge.net/ 00:04:10.825 00:04:10.825 00:04:10.825 Suite: components_suite 00:04:11.391 Test: vtophys_malloc_test ...passed 00:04:11.391 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:11.391 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.391 EAL: Restoring previous memory policy: 4 00:04:11.391 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.391 EAL: request: mp_malloc_sync 00:04:11.391 EAL: No shared files mode enabled, IPC is disabled 00:04:11.391 EAL: Heap on socket 0 was expanded by 4MB 00:04:11.391 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.391 EAL: request: mp_malloc_sync 00:04:11.391 EAL: No shared files mode enabled, IPC is disabled 00:04:11.391 EAL: Heap on socket 0 was shrunk by 4MB 00:04:11.391 EAL: Trying to obtain current memory policy. 
00:04:11.391 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.391 EAL: Restoring previous memory policy: 4 00:04:11.391 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.391 EAL: request: mp_malloc_sync 00:04:11.391 EAL: No shared files mode enabled, IPC is disabled 00:04:11.391 EAL: Heap on socket 0 was expanded by 6MB 00:04:11.391 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.391 EAL: request: mp_malloc_sync 00:04:11.391 EAL: No shared files mode enabled, IPC is disabled 00:04:11.392 EAL: Heap on socket 0 was shrunk by 6MB 00:04:11.392 EAL: Trying to obtain current memory policy. 00:04:11.392 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.392 EAL: Restoring previous memory policy: 4 00:04:11.392 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.392 EAL: request: mp_malloc_sync 00:04:11.392 EAL: No shared files mode enabled, IPC is disabled 00:04:11.392 EAL: Heap on socket 0 was expanded by 10MB 00:04:11.392 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.392 EAL: request: mp_malloc_sync 00:04:11.392 EAL: No shared files mode enabled, IPC is disabled 00:04:11.392 EAL: Heap on socket 0 was shrunk by 10MB 00:04:11.392 EAL: Trying to obtain current memory policy. 00:04:11.392 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.392 EAL: Restoring previous memory policy: 4 00:04:11.392 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.392 EAL: request: mp_malloc_sync 00:04:11.392 EAL: No shared files mode enabled, IPC is disabled 00:04:11.392 EAL: Heap on socket 0 was expanded by 18MB 00:04:11.392 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.392 EAL: request: mp_malloc_sync 00:04:11.392 EAL: No shared files mode enabled, IPC is disabled 00:04:11.392 EAL: Heap on socket 0 was shrunk by 18MB 00:04:11.392 EAL: Trying to obtain current memory policy. 00:04:11.392 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.392 EAL: Restoring previous memory policy: 4 00:04:11.392 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.392 EAL: request: mp_malloc_sync 00:04:11.392 EAL: No shared files mode enabled, IPC is disabled 00:04:11.392 EAL: Heap on socket 0 was expanded by 34MB 00:04:11.392 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.392 EAL: request: mp_malloc_sync 00:04:11.392 EAL: No shared files mode enabled, IPC is disabled 00:04:11.392 EAL: Heap on socket 0 was shrunk by 34MB 00:04:11.392 EAL: Trying to obtain current memory policy. 00:04:11.392 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.392 EAL: Restoring previous memory policy: 4 00:04:11.392 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.392 EAL: request: mp_malloc_sync 00:04:11.392 EAL: No shared files mode enabled, IPC is disabled 00:04:11.392 EAL: Heap on socket 0 was expanded by 66MB 00:04:11.650 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.650 EAL: request: mp_malloc_sync 00:04:11.650 EAL: No shared files mode enabled, IPC is disabled 00:04:11.650 EAL: Heap on socket 0 was shrunk by 66MB 00:04:11.650 EAL: Trying to obtain current memory policy. 
00:04:11.650 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.650 EAL: Restoring previous memory policy: 4 00:04:11.650 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.650 EAL: request: mp_malloc_sync 00:04:11.650 EAL: No shared files mode enabled, IPC is disabled 00:04:11.650 EAL: Heap on socket 0 was expanded by 130MB 00:04:11.908 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.166 EAL: request: mp_malloc_sync 00:04:12.166 EAL: No shared files mode enabled, IPC is disabled 00:04:12.166 EAL: Heap on socket 0 was shrunk by 130MB 00:04:12.166 EAL: Trying to obtain current memory policy. 00:04:12.166 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.424 EAL: Restoring previous memory policy: 4 00:04:12.424 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.424 EAL: request: mp_malloc_sync 00:04:12.424 EAL: No shared files mode enabled, IPC is disabled 00:04:12.424 EAL: Heap on socket 0 was expanded by 258MB 00:04:12.682 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.939 EAL: request: mp_malloc_sync 00:04:12.939 EAL: No shared files mode enabled, IPC is disabled 00:04:12.939 EAL: Heap on socket 0 was shrunk by 258MB 00:04:13.197 EAL: Trying to obtain current memory policy. 00:04:13.197 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.455 EAL: Restoring previous memory policy: 4 00:04:13.455 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.455 EAL: request: mp_malloc_sync 00:04:13.455 EAL: No shared files mode enabled, IPC is disabled 00:04:13.455 EAL: Heap on socket 0 was expanded by 514MB 00:04:14.388 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.388 EAL: request: mp_malloc_sync 00:04:14.388 EAL: No shared files mode enabled, IPC is disabled 00:04:14.388 EAL: Heap on socket 0 was shrunk by 514MB 00:04:15.322 EAL: Trying to obtain current memory policy. 
00:04:15.322 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.580 EAL: Restoring previous memory policy: 4 00:04:15.580 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.580 EAL: request: mp_malloc_sync 00:04:15.580 EAL: No shared files mode enabled, IPC is disabled 00:04:15.580 EAL: Heap on socket 0 was expanded by 1026MB 00:04:17.476 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.734 EAL: request: mp_malloc_sync 00:04:17.734 EAL: No shared files mode enabled, IPC is disabled 00:04:17.734 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:19.634 passed 00:04:19.634 00:04:19.634 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.634 suites 1 1 n/a 0 0 00:04:19.634 tests 2 2 2 0 0 00:04:19.634 asserts 497 497 497 0 n/a 00:04:19.634 00:04:19.634 Elapsed time = 8.354 seconds 00:04:19.634 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.634 EAL: request: mp_malloc_sync 00:04:19.634 EAL: No shared files mode enabled, IPC is disabled 00:04:19.634 EAL: Heap on socket 0 was shrunk by 2MB 00:04:19.634 EAL: No shared files mode enabled, IPC is disabled 00:04:19.634 EAL: No shared files mode enabled, IPC is disabled 00:04:19.634 EAL: No shared files mode enabled, IPC is disabled 00:04:19.634 00:04:19.634 real 0m8.616s 00:04:19.634 user 0m7.495s 00:04:19.634 sys 0m1.060s 00:04:19.634 04:53:25 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.634 04:53:25 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:19.634 ************************************ 00:04:19.634 END TEST env_vtophys 00:04:19.634 ************************************ 00:04:19.634 04:53:25 env -- common/autotest_common.sh@1142 -- # return 0 00:04:19.634 04:53:25 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:19.634 04:53:25 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.634 04:53:25 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.634 04:53:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.634 ************************************ 00:04:19.634 START TEST env_pci 00:04:19.634 ************************************ 00:04:19.634 04:53:25 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:19.634 00:04:19.634 00:04:19.634 CUnit - A unit testing framework for C - Version 2.1-3 00:04:19.634 http://cunit.sourceforge.net/ 00:04:19.634 00:04:19.634 00:04:19.634 Suite: pci 00:04:19.634 Test: pci_hook ...[2024-07-13 04:53:25.756709] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 550489 has claimed it 00:04:19.634 EAL: Cannot find device (10000:00:01.0) 00:04:19.634 EAL: Failed to attach device on primary process 00:04:19.634 passed 00:04:19.634 00:04:19.634 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.634 suites 1 1 n/a 0 0 00:04:19.634 tests 1 1 1 0 0 00:04:19.634 asserts 25 25 25 0 n/a 00:04:19.634 00:04:19.634 Elapsed time = 0.042 seconds 00:04:19.634 00:04:19.634 real 0m0.093s 00:04:19.634 user 0m0.037s 00:04:19.634 sys 0m0.055s 00:04:19.634 04:53:25 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.634 04:53:25 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:19.634 ************************************ 00:04:19.634 END TEST env_pci 00:04:19.634 ************************************ 
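Each suite above and below follows the same harness pattern: autotest.sh (or env.sh) passes a test name plus a binary path to the run_test helper, which emits the START TEST / END TEST banners seen in this log and propagates the test's exit code. A minimal bash sketch of that wrapper follows; the body is a simplified assumption for illustration — the real helper in SPDK's autotest_common.sh additionally records timing and manages xtrace state.

# Sketch (assumption) of the run_test wrapper pattern visible throughout this log.
run_test() {
    local name="$1"; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    "$@"                       # run the test binary or script with its arguments
    local rc=$?                # $? is captured before 'local' resets it
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

# Invocation matching the log entry above:
# run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut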
00:04:19.634 04:53:25 env -- common/autotest_common.sh@1142 -- # return 0 00:04:19.634 04:53:25 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:19.634 04:53:25 env -- env/env.sh@15 -- # uname 00:04:19.634 04:53:25 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:19.634 04:53:25 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:19.634 04:53:25 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:19.634 04:53:25 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:19.634 04:53:25 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.634 04:53:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.634 ************************************ 00:04:19.634 START TEST env_dpdk_post_init 00:04:19.634 ************************************ 00:04:19.634 04:53:25 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:19.634 EAL: Detected CPU lcores: 48 00:04:19.634 EAL: Detected NUMA nodes: 2 00:04:19.634 EAL: Detected shared linkage of DPDK 00:04:19.634 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:19.634 EAL: Selected IOVA mode 'VA' 00:04:19.634 EAL: No free 2048 kB hugepages reported on node 1 00:04:19.634 EAL: VFIO support initialized 00:04:19.634 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:19.634 EAL: Using IOMMU type 1 (Type 1) 00:04:19.634 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:19.634 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:19.893 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:19.893 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:19.893 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:19.893 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:19.893 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:19.893 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:19.893 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:19.893 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:19.893 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:19.893 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:19.893 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:19.893 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:19.893 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:19.893 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:20.827 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:04:24.101 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:24.101 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:24.101 Starting DPDK initialization... 00:04:24.101 Starting SPDK post initialization... 00:04:24.101 SPDK NVMe probe 00:04:24.101 Attaching to 0000:88:00.0 00:04:24.101 Attached to 0000:88:00.0 00:04:24.101 Cleaning up... 
00:04:24.101 00:04:24.101 real 0m4.547s 00:04:24.101 user 0m3.349s 00:04:24.101 sys 0m0.249s 00:04:24.101 04:53:30 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.101 04:53:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:24.101 ************************************ 00:04:24.101 END TEST env_dpdk_post_init 00:04:24.101 ************************************ 00:04:24.101 04:53:30 env -- common/autotest_common.sh@1142 -- # return 0 00:04:24.101 04:53:30 env -- env/env.sh@26 -- # uname 00:04:24.101 04:53:30 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:24.101 04:53:30 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:24.101 04:53:30 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.101 04:53:30 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.101 04:53:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.101 ************************************ 00:04:24.101 START TEST env_mem_callbacks 00:04:24.101 ************************************ 00:04:24.101 04:53:30 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:24.101 EAL: Detected CPU lcores: 48 00:04:24.101 EAL: Detected NUMA nodes: 2 00:04:24.101 EAL: Detected shared linkage of DPDK 00:04:24.101 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:24.101 EAL: Selected IOVA mode 'VA' 00:04:24.101 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.101 EAL: VFIO support initialized 00:04:24.101 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:24.101 00:04:24.101 00:04:24.101 CUnit - A unit testing framework for C - Version 2.1-3 00:04:24.101 http://cunit.sourceforge.net/ 00:04:24.101 00:04:24.101 00:04:24.101 Suite: memory 00:04:24.101 Test: test ... 
00:04:24.101 register 0x200000200000 2097152 00:04:24.101 malloc 3145728 00:04:24.101 register 0x200000400000 4194304 00:04:24.101 buf 0x2000004fffc0 len 3145728 PASSED 00:04:24.101 malloc 64 00:04:24.101 buf 0x2000004ffec0 len 64 PASSED 00:04:24.101 malloc 4194304 00:04:24.101 register 0x200000800000 6291456 00:04:24.101 buf 0x2000009fffc0 len 4194304 PASSED 00:04:24.101 free 0x2000004fffc0 3145728 00:04:24.101 free 0x2000004ffec0 64 00:04:24.101 unregister 0x200000400000 4194304 PASSED 00:04:24.101 free 0x2000009fffc0 4194304 00:04:24.101 unregister 0x200000800000 6291456 PASSED 00:04:24.359 malloc 8388608 00:04:24.359 register 0x200000400000 10485760 00:04:24.359 buf 0x2000005fffc0 len 8388608 PASSED 00:04:24.359 free 0x2000005fffc0 8388608 00:04:24.359 unregister 0x200000400000 10485760 PASSED 00:04:24.359 passed 00:04:24.359 00:04:24.359 Run Summary: Type Total Ran Passed Failed Inactive 00:04:24.359 suites 1 1 n/a 0 0 00:04:24.359 tests 1 1 1 0 0 00:04:24.359 asserts 15 15 15 0 n/a 00:04:24.359 00:04:24.359 Elapsed time = 0.061 seconds 00:04:24.359 00:04:24.359 real 0m0.182s 00:04:24.359 user 0m0.104s 00:04:24.359 sys 0m0.077s 00:04:24.359 04:53:30 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.359 04:53:30 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:24.359 ************************************ 00:04:24.359 END TEST env_mem_callbacks 00:04:24.359 ************************************ 00:04:24.359 04:53:30 env -- common/autotest_common.sh@1142 -- # return 0 00:04:24.359 00:04:24.359 real 0m13.994s 00:04:24.359 user 0m11.357s 00:04:24.359 sys 0m1.644s 00:04:24.359 04:53:30 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.359 04:53:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.359 ************************************ 00:04:24.359 END TEST env 00:04:24.359 ************************************ 00:04:24.359 04:53:30 -- common/autotest_common.sh@1142 -- # return 0 00:04:24.359 04:53:30 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:24.359 04:53:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.359 04:53:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.359 04:53:30 -- common/autotest_common.sh@10 -- # set +x 00:04:24.359 ************************************ 00:04:24.359 START TEST rpc 00:04:24.359 ************************************ 00:04:24.359 04:53:30 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:24.359 * Looking for test storage... 00:04:24.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:24.359 04:53:30 rpc -- rpc/rpc.sh@65 -- # spdk_pid=551268 00:04:24.359 04:53:30 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:24.359 04:53:30 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:24.359 04:53:30 rpc -- rpc/rpc.sh@67 -- # waitforlisten 551268 00:04:24.359 04:53:30 rpc -- common/autotest_common.sh@829 -- # '[' -z 551268 ']' 00:04:24.359 04:53:30 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.359 04:53:30 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:24.359 04:53:30 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:24.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.359 04:53:30 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:24.359 04:53:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.617 [2024-07-13 04:53:30.865978] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:24.617 [2024-07-13 04:53:30.866123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid551268 ] 00:04:24.617 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.617 [2024-07-13 04:53:30.990895] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.875 [2024-07-13 04:53:31.241927] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:24.875 [2024-07-13 04:53:31.242016] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 551268' to capture a snapshot of events at runtime. 00:04:24.875 [2024-07-13 04:53:31.242041] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:24.875 [2024-07-13 04:53:31.242070] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:24.875 [2024-07-13 04:53:31.242089] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid551268 for offline analysis/debug. 00:04:24.875 [2024-07-13 04:53:31.242146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.844 04:53:32 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:25.844 04:53:32 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:25.844 04:53:32 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:25.844 04:53:32 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:25.844 04:53:32 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:25.844 04:53:32 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:25.844 04:53:32 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.844 04:53:32 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.844 04:53:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.844 ************************************ 00:04:25.844 START TEST rpc_integrity 00:04:25.844 ************************************ 00:04:25.844 04:53:32 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:25.844 04:53:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:25.844 04:53:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.844 04:53:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.844 04:53:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.844 04:53:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:04:25.844 04:53:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:25.844 04:53:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:25.844 04:53:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:25.844 04:53:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.844 04:53:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.844 04:53:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.844 04:53:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:25.844 04:53:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:25.844 04:53:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.844 04:53:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.844 04:53:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.844 04:53:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:25.844 { 00:04:25.844 "name": "Malloc0", 00:04:25.844 "aliases": [ 00:04:25.844 "beb526f5-b657-41a1-8954-c6ba52abd622" 00:04:25.844 ], 00:04:25.844 "product_name": "Malloc disk", 00:04:25.844 "block_size": 512, 00:04:25.844 "num_blocks": 16384, 00:04:25.844 "uuid": "beb526f5-b657-41a1-8954-c6ba52abd622", 00:04:25.844 "assigned_rate_limits": { 00:04:25.844 "rw_ios_per_sec": 0, 00:04:25.844 "rw_mbytes_per_sec": 0, 00:04:25.844 "r_mbytes_per_sec": 0, 00:04:25.844 "w_mbytes_per_sec": 0 00:04:25.844 }, 00:04:25.844 "claimed": false, 00:04:25.844 "zoned": false, 00:04:25.844 "supported_io_types": { 00:04:25.844 "read": true, 00:04:25.844 "write": true, 00:04:25.844 "unmap": true, 00:04:25.844 "flush": true, 00:04:25.844 "reset": true, 00:04:25.844 "nvme_admin": false, 00:04:25.844 "nvme_io": false, 00:04:25.844 "nvme_io_md": false, 00:04:25.844 "write_zeroes": true, 00:04:25.844 "zcopy": true, 00:04:25.844 "get_zone_info": false, 00:04:25.845 "zone_management": false, 00:04:25.845 "zone_append": false, 00:04:25.845 "compare": false, 00:04:25.845 "compare_and_write": false, 00:04:25.845 "abort": true, 00:04:25.845 "seek_hole": false, 00:04:25.845 "seek_data": false, 00:04:25.845 "copy": true, 00:04:25.845 "nvme_iov_md": false 00:04:25.845 }, 00:04:25.845 "memory_domains": [ 00:04:25.845 { 00:04:25.845 "dma_device_id": "system", 00:04:25.845 "dma_device_type": 1 00:04:25.845 }, 00:04:25.845 { 00:04:25.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.845 "dma_device_type": 2 00:04:25.845 } 00:04:25.845 ], 00:04:25.845 "driver_specific": {} 00:04:25.845 } 00:04:25.845 ]' 00:04:25.845 04:53:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:25.845 04:53:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:25.845 04:53:32 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:25.845 04:53:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.845 04:53:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.845 [2024-07-13 04:53:32.247143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:25.845 [2024-07-13 04:53:32.247237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:25.845 [2024-07-13 04:53:32.247283] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022880 00:04:25.845 [2024-07-13 04:53:32.247314] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:04:25.845 [2024-07-13 04:53:32.250042] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:25.845 [2024-07-13 04:53:32.250086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:25.845 Passthru0 00:04:25.845 04:53:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.845 04:53:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:25.845 04:53:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.845 04:53:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.845 04:53:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.845 04:53:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:25.845 { 00:04:25.845 "name": "Malloc0", 00:04:25.845 "aliases": [ 00:04:25.845 "beb526f5-b657-41a1-8954-c6ba52abd622" 00:04:25.845 ], 00:04:25.845 "product_name": "Malloc disk", 00:04:25.845 "block_size": 512, 00:04:25.845 "num_blocks": 16384, 00:04:25.845 "uuid": "beb526f5-b657-41a1-8954-c6ba52abd622", 00:04:25.845 "assigned_rate_limits": { 00:04:25.845 "rw_ios_per_sec": 0, 00:04:25.845 "rw_mbytes_per_sec": 0, 00:04:25.845 "r_mbytes_per_sec": 0, 00:04:25.845 "w_mbytes_per_sec": 0 00:04:25.845 }, 00:04:25.845 "claimed": true, 00:04:25.845 "claim_type": "exclusive_write", 00:04:25.845 "zoned": false, 00:04:25.845 "supported_io_types": { 00:04:25.845 "read": true, 00:04:25.845 "write": true, 00:04:25.845 "unmap": true, 00:04:25.845 "flush": true, 00:04:25.845 "reset": true, 00:04:25.845 "nvme_admin": false, 00:04:25.845 "nvme_io": false, 00:04:25.845 "nvme_io_md": false, 00:04:25.845 "write_zeroes": true, 00:04:25.845 "zcopy": true, 00:04:25.845 "get_zone_info": false, 00:04:25.845 "zone_management": false, 00:04:25.845 "zone_append": false, 00:04:25.845 "compare": false, 00:04:25.845 "compare_and_write": false, 00:04:25.845 "abort": true, 00:04:25.845 "seek_hole": false, 00:04:25.845 "seek_data": false, 00:04:25.845 "copy": true, 00:04:25.845 "nvme_iov_md": false 00:04:25.845 }, 00:04:25.845 "memory_domains": [ 00:04:25.845 { 00:04:25.845 "dma_device_id": "system", 00:04:25.845 "dma_device_type": 1 00:04:25.845 }, 00:04:25.845 { 00:04:25.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.845 "dma_device_type": 2 00:04:25.845 } 00:04:25.845 ], 00:04:25.845 "driver_specific": {} 00:04:25.845 }, 00:04:25.845 { 00:04:25.845 "name": "Passthru0", 00:04:25.845 "aliases": [ 00:04:25.845 "44c6e4d4-3d93-5475-a3aa-daf963e824df" 00:04:25.845 ], 00:04:25.845 "product_name": "passthru", 00:04:25.845 "block_size": 512, 00:04:25.845 "num_blocks": 16384, 00:04:25.845 "uuid": "44c6e4d4-3d93-5475-a3aa-daf963e824df", 00:04:25.845 "assigned_rate_limits": { 00:04:25.845 "rw_ios_per_sec": 0, 00:04:25.845 "rw_mbytes_per_sec": 0, 00:04:25.845 "r_mbytes_per_sec": 0, 00:04:25.845 "w_mbytes_per_sec": 0 00:04:25.845 }, 00:04:25.845 "claimed": false, 00:04:25.845 "zoned": false, 00:04:25.845 "supported_io_types": { 00:04:25.845 "read": true, 00:04:25.845 "write": true, 00:04:25.845 "unmap": true, 00:04:25.845 "flush": true, 00:04:25.845 "reset": true, 00:04:25.845 "nvme_admin": false, 00:04:25.845 "nvme_io": false, 00:04:25.845 "nvme_io_md": false, 00:04:25.845 "write_zeroes": true, 00:04:25.845 "zcopy": true, 00:04:25.845 "get_zone_info": false, 00:04:25.845 "zone_management": false, 00:04:25.845 "zone_append": false, 00:04:25.845 "compare": false, 00:04:25.845 "compare_and_write": false, 00:04:25.845 "abort": true, 00:04:25.845 
"seek_hole": false, 00:04:25.845 "seek_data": false, 00:04:25.845 "copy": true, 00:04:25.845 "nvme_iov_md": false 00:04:25.845 }, 00:04:25.845 "memory_domains": [ 00:04:25.845 { 00:04:25.845 "dma_device_id": "system", 00:04:25.845 "dma_device_type": 1 00:04:25.845 }, 00:04:25.845 { 00:04:25.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.845 "dma_device_type": 2 00:04:25.845 } 00:04:25.845 ], 00:04:25.845 "driver_specific": { 00:04:25.845 "passthru": { 00:04:25.845 "name": "Passthru0", 00:04:25.845 "base_bdev_name": "Malloc0" 00:04:25.845 } 00:04:25.845 } 00:04:25.845 } 00:04:25.845 ]' 00:04:25.845 04:53:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:25.845 04:53:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:25.845 04:53:32 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:25.845 04:53:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.845 04:53:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.845 04:53:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.845 04:53:32 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:25.845 04:53:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.845 04:53:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.845 04:53:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.845 04:53:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:25.845 04:53:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.103 04:53:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.103 04:53:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.103 04:53:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:26.103 04:53:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:26.103 04:53:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:26.103 00:04:26.103 real 0m0.260s 00:04:26.103 user 0m0.154s 00:04:26.103 sys 0m0.020s 00:04:26.103 04:53:32 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.103 04:53:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.103 ************************************ 00:04:26.103 END TEST rpc_integrity 00:04:26.103 ************************************ 00:04:26.103 04:53:32 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:26.103 04:53:32 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:26.103 04:53:32 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.103 04:53:32 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.103 04:53:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.103 ************************************ 00:04:26.103 START TEST rpc_plugins 00:04:26.103 ************************************ 00:04:26.103 04:53:32 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:26.103 04:53:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:26.103 04:53:32 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.103 04:53:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:26.103 04:53:32 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.103 04:53:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:26.103 04:53:32 rpc.rpc_plugins 
-- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:26.103 04:53:32 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.103 04:53:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:26.103 04:53:32 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.103 04:53:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:26.103 { 00:04:26.103 "name": "Malloc1", 00:04:26.103 "aliases": [ 00:04:26.103 "c4fc8eab-fd14-42aa-a0f7-49b44cf69816" 00:04:26.103 ], 00:04:26.103 "product_name": "Malloc disk", 00:04:26.103 "block_size": 4096, 00:04:26.103 "num_blocks": 256, 00:04:26.103 "uuid": "c4fc8eab-fd14-42aa-a0f7-49b44cf69816", 00:04:26.103 "assigned_rate_limits": { 00:04:26.103 "rw_ios_per_sec": 0, 00:04:26.103 "rw_mbytes_per_sec": 0, 00:04:26.103 "r_mbytes_per_sec": 0, 00:04:26.104 "w_mbytes_per_sec": 0 00:04:26.104 }, 00:04:26.104 "claimed": false, 00:04:26.104 "zoned": false, 00:04:26.104 "supported_io_types": { 00:04:26.104 "read": true, 00:04:26.104 "write": true, 00:04:26.104 "unmap": true, 00:04:26.104 "flush": true, 00:04:26.104 "reset": true, 00:04:26.104 "nvme_admin": false, 00:04:26.104 "nvme_io": false, 00:04:26.104 "nvme_io_md": false, 00:04:26.104 "write_zeroes": true, 00:04:26.104 "zcopy": true, 00:04:26.104 "get_zone_info": false, 00:04:26.104 "zone_management": false, 00:04:26.104 "zone_append": false, 00:04:26.104 "compare": false, 00:04:26.104 "compare_and_write": false, 00:04:26.104 "abort": true, 00:04:26.104 "seek_hole": false, 00:04:26.104 "seek_data": false, 00:04:26.104 "copy": true, 00:04:26.104 "nvme_iov_md": false 00:04:26.104 }, 00:04:26.104 "memory_domains": [ 00:04:26.104 { 00:04:26.104 "dma_device_id": "system", 00:04:26.104 "dma_device_type": 1 00:04:26.104 }, 00:04:26.104 { 00:04:26.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.104 "dma_device_type": 2 00:04:26.104 } 00:04:26.104 ], 00:04:26.104 "driver_specific": {} 00:04:26.104 } 00:04:26.104 ]' 00:04:26.104 04:53:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:26.104 04:53:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:26.104 04:53:32 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:26.104 04:53:32 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.104 04:53:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:26.104 04:53:32 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.104 04:53:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:26.104 04:53:32 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.104 04:53:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:26.104 04:53:32 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.104 04:53:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:26.104 04:53:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:26.104 04:53:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:26.104 00:04:26.104 real 0m0.119s 00:04:26.104 user 0m0.074s 00:04:26.104 sys 0m0.012s 00:04:26.104 04:53:32 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.104 04:53:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:26.104 ************************************ 00:04:26.104 END TEST rpc_plugins 00:04:26.104 ************************************ 00:04:26.104 04:53:32 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:26.104 04:53:32 rpc 
-- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:26.104 04:53:32 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.104 04:53:32 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.104 04:53:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.104 ************************************ 00:04:26.104 START TEST rpc_trace_cmd_test 00:04:26.104 ************************************ 00:04:26.104 04:53:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:26.104 04:53:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:26.104 04:53:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:26.104 04:53:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.104 04:53:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:26.362 04:53:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.362 04:53:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:26.362 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid551268", 00:04:26.362 "tpoint_group_mask": "0x8", 00:04:26.362 "iscsi_conn": { 00:04:26.362 "mask": "0x2", 00:04:26.362 "tpoint_mask": "0x0" 00:04:26.362 }, 00:04:26.362 "scsi": { 00:04:26.362 "mask": "0x4", 00:04:26.362 "tpoint_mask": "0x0" 00:04:26.362 }, 00:04:26.362 "bdev": { 00:04:26.362 "mask": "0x8", 00:04:26.362 "tpoint_mask": "0xffffffffffffffff" 00:04:26.362 }, 00:04:26.362 "nvmf_rdma": { 00:04:26.362 "mask": "0x10", 00:04:26.362 "tpoint_mask": "0x0" 00:04:26.362 }, 00:04:26.362 "nvmf_tcp": { 00:04:26.362 "mask": "0x20", 00:04:26.362 "tpoint_mask": "0x0" 00:04:26.362 }, 00:04:26.362 "ftl": { 00:04:26.362 "mask": "0x40", 00:04:26.362 "tpoint_mask": "0x0" 00:04:26.362 }, 00:04:26.362 "blobfs": { 00:04:26.362 "mask": "0x80", 00:04:26.362 "tpoint_mask": "0x0" 00:04:26.362 }, 00:04:26.362 "dsa": { 00:04:26.362 "mask": "0x200", 00:04:26.362 "tpoint_mask": "0x0" 00:04:26.362 }, 00:04:26.362 "thread": { 00:04:26.362 "mask": "0x400", 00:04:26.362 "tpoint_mask": "0x0" 00:04:26.362 }, 00:04:26.362 "nvme_pcie": { 00:04:26.362 "mask": "0x800", 00:04:26.362 "tpoint_mask": "0x0" 00:04:26.362 }, 00:04:26.362 "iaa": { 00:04:26.362 "mask": "0x1000", 00:04:26.362 "tpoint_mask": "0x0" 00:04:26.362 }, 00:04:26.362 "nvme_tcp": { 00:04:26.362 "mask": "0x2000", 00:04:26.362 "tpoint_mask": "0x0" 00:04:26.362 }, 00:04:26.362 "bdev_nvme": { 00:04:26.362 "mask": "0x4000", 00:04:26.362 "tpoint_mask": "0x0" 00:04:26.362 }, 00:04:26.362 "sock": { 00:04:26.362 "mask": "0x8000", 00:04:26.362 "tpoint_mask": "0x0" 00:04:26.362 } 00:04:26.362 }' 00:04:26.362 04:53:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:26.362 04:53:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:26.362 04:53:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:26.362 04:53:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:26.362 04:53:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:26.362 04:53:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:26.362 04:53:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:26.362 04:53:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:26.362 04:53:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:26.362 04:53:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 
0xffffffffffffffff '!=' 0x0 ']' 00:04:26.362 00:04:26.362 real 0m0.198s 00:04:26.362 user 0m0.171s 00:04:26.362 sys 0m0.019s 00:04:26.362 04:53:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.362 04:53:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:26.362 ************************************ 00:04:26.362 END TEST rpc_trace_cmd_test 00:04:26.362 ************************************ 00:04:26.362 04:53:32 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:26.362 04:53:32 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:26.362 04:53:32 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:26.362 04:53:32 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:26.362 04:53:32 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.362 04:53:32 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.362 04:53:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.362 ************************************ 00:04:26.362 START TEST rpc_daemon_integrity 00:04:26.362 ************************************ 00:04:26.362 04:53:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:26.362 04:53:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:26.362 04:53:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.362 04:53:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.362 04:53:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.362 04:53:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:26.362 04:53:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:26.620 04:53:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:26.620 04:53:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:26.620 04:53:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.620 04:53:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.620 04:53:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.620 04:53:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:26.620 04:53:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:26.620 04:53:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.620 04:53:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.620 04:53:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.620 04:53:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:26.620 { 00:04:26.620 "name": "Malloc2", 00:04:26.620 "aliases": [ 00:04:26.620 "c554c120-e252-4902-a8c5-50e1f0c89db7" 00:04:26.620 ], 00:04:26.620 "product_name": "Malloc disk", 00:04:26.620 "block_size": 512, 00:04:26.620 "num_blocks": 16384, 00:04:26.620 "uuid": "c554c120-e252-4902-a8c5-50e1f0c89db7", 00:04:26.620 "assigned_rate_limits": { 00:04:26.620 "rw_ios_per_sec": 0, 00:04:26.620 "rw_mbytes_per_sec": 0, 00:04:26.620 "r_mbytes_per_sec": 0, 00:04:26.620 "w_mbytes_per_sec": 0 00:04:26.620 }, 00:04:26.620 "claimed": false, 00:04:26.620 "zoned": false, 00:04:26.620 "supported_io_types": { 00:04:26.620 "read": true, 00:04:26.620 "write": true, 00:04:26.620 "unmap": true, 00:04:26.620 "flush": true, 00:04:26.620 "reset": true, 00:04:26.620 "nvme_admin": false, 
00:04:26.620 "nvme_io": false, 00:04:26.620 "nvme_io_md": false, 00:04:26.620 "write_zeroes": true, 00:04:26.620 "zcopy": true, 00:04:26.620 "get_zone_info": false, 00:04:26.620 "zone_management": false, 00:04:26.620 "zone_append": false, 00:04:26.620 "compare": false, 00:04:26.620 "compare_and_write": false, 00:04:26.620 "abort": true, 00:04:26.620 "seek_hole": false, 00:04:26.620 "seek_data": false, 00:04:26.620 "copy": true, 00:04:26.620 "nvme_iov_md": false 00:04:26.620 }, 00:04:26.620 "memory_domains": [ 00:04:26.620 { 00:04:26.620 "dma_device_id": "system", 00:04:26.620 "dma_device_type": 1 00:04:26.620 }, 00:04:26.620 { 00:04:26.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.620 "dma_device_type": 2 00:04:26.620 } 00:04:26.620 ], 00:04:26.620 "driver_specific": {} 00:04:26.620 } 00:04:26.620 ]' 00:04:26.620 04:53:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:26.620 04:53:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:26.620 04:53:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:26.620 04:53:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.620 04:53:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.620 [2024-07-13 04:53:32.965228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:26.620 [2024-07-13 04:53:32.965300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:26.620 [2024-07-13 04:53:32.965340] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000023a80 00:04:26.620 [2024-07-13 04:53:32.965370] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:26.620 [2024-07-13 04:53:32.968044] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:26.620 [2024-07-13 04:53:32.968081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:26.621 Passthru0 00:04:26.621 04:53:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.621 04:53:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:26.621 04:53:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.621 04:53:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.621 04:53:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.621 04:53:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:26.621 { 00:04:26.621 "name": "Malloc2", 00:04:26.621 "aliases": [ 00:04:26.621 "c554c120-e252-4902-a8c5-50e1f0c89db7" 00:04:26.621 ], 00:04:26.621 "product_name": "Malloc disk", 00:04:26.621 "block_size": 512, 00:04:26.621 "num_blocks": 16384, 00:04:26.621 "uuid": "c554c120-e252-4902-a8c5-50e1f0c89db7", 00:04:26.621 "assigned_rate_limits": { 00:04:26.621 "rw_ios_per_sec": 0, 00:04:26.621 "rw_mbytes_per_sec": 0, 00:04:26.621 "r_mbytes_per_sec": 0, 00:04:26.621 "w_mbytes_per_sec": 0 00:04:26.621 }, 00:04:26.621 "claimed": true, 00:04:26.621 "claim_type": "exclusive_write", 00:04:26.621 "zoned": false, 00:04:26.621 "supported_io_types": { 00:04:26.621 "read": true, 00:04:26.621 "write": true, 00:04:26.621 "unmap": true, 00:04:26.621 "flush": true, 00:04:26.621 "reset": true, 00:04:26.621 "nvme_admin": false, 00:04:26.621 "nvme_io": false, 00:04:26.621 "nvme_io_md": false, 00:04:26.621 "write_zeroes": true, 00:04:26.621 "zcopy": 
true, 00:04:26.621 "get_zone_info": false, 00:04:26.621 "zone_management": false, 00:04:26.621 "zone_append": false, 00:04:26.621 "compare": false, 00:04:26.621 "compare_and_write": false, 00:04:26.621 "abort": true, 00:04:26.621 "seek_hole": false, 00:04:26.621 "seek_data": false, 00:04:26.621 "copy": true, 00:04:26.621 "nvme_iov_md": false 00:04:26.621 }, 00:04:26.621 "memory_domains": [ 00:04:26.621 { 00:04:26.621 "dma_device_id": "system", 00:04:26.621 "dma_device_type": 1 00:04:26.621 }, 00:04:26.621 { 00:04:26.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.621 "dma_device_type": 2 00:04:26.621 } 00:04:26.621 ], 00:04:26.621 "driver_specific": {} 00:04:26.621 }, 00:04:26.621 { 00:04:26.621 "name": "Passthru0", 00:04:26.621 "aliases": [ 00:04:26.621 "087d5402-92d6-5377-95cd-8a2c41c611e3" 00:04:26.621 ], 00:04:26.621 "product_name": "passthru", 00:04:26.621 "block_size": 512, 00:04:26.621 "num_blocks": 16384, 00:04:26.621 "uuid": "087d5402-92d6-5377-95cd-8a2c41c611e3", 00:04:26.621 "assigned_rate_limits": { 00:04:26.621 "rw_ios_per_sec": 0, 00:04:26.621 "rw_mbytes_per_sec": 0, 00:04:26.621 "r_mbytes_per_sec": 0, 00:04:26.621 "w_mbytes_per_sec": 0 00:04:26.621 }, 00:04:26.621 "claimed": false, 00:04:26.621 "zoned": false, 00:04:26.621 "supported_io_types": { 00:04:26.621 "read": true, 00:04:26.621 "write": true, 00:04:26.621 "unmap": true, 00:04:26.621 "flush": true, 00:04:26.621 "reset": true, 00:04:26.621 "nvme_admin": false, 00:04:26.621 "nvme_io": false, 00:04:26.621 "nvme_io_md": false, 00:04:26.621 "write_zeroes": true, 00:04:26.621 "zcopy": true, 00:04:26.621 "get_zone_info": false, 00:04:26.621 "zone_management": false, 00:04:26.621 "zone_append": false, 00:04:26.621 "compare": false, 00:04:26.621 "compare_and_write": false, 00:04:26.621 "abort": true, 00:04:26.621 "seek_hole": false, 00:04:26.621 "seek_data": false, 00:04:26.621 "copy": true, 00:04:26.621 "nvme_iov_md": false 00:04:26.621 }, 00:04:26.621 "memory_domains": [ 00:04:26.621 { 00:04:26.621 "dma_device_id": "system", 00:04:26.621 "dma_device_type": 1 00:04:26.621 }, 00:04:26.621 { 00:04:26.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.621 "dma_device_type": 2 00:04:26.621 } 00:04:26.621 ], 00:04:26.621 "driver_specific": { 00:04:26.621 "passthru": { 00:04:26.621 "name": "Passthru0", 00:04:26.621 "base_bdev_name": "Malloc2" 00:04:26.621 } 00:04:26.621 } 00:04:26.621 } 00:04:26.621 ]' 00:04:26.621 04:53:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:26.621 04:53:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:26.621 04:53:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:26.621 04:53:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.621 04:53:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.621 04:53:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.621 04:53:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:26.621 04:53:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.621 04:53:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.621 04:53:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.621 04:53:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:26.621 04:53:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:04:26.621 04:53:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.621 04:53:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.621 04:53:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:26.621 04:53:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:26.621 04:53:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:26.621 00:04:26.621 real 0m0.262s 00:04:26.621 user 0m0.150s 00:04:26.621 sys 0m0.026s 00:04:26.621 04:53:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.621 04:53:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.621 ************************************ 00:04:26.621 END TEST rpc_daemon_integrity 00:04:26.621 ************************************ 00:04:26.878 04:53:33 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:26.878 04:53:33 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:26.878 04:53:33 rpc -- rpc/rpc.sh@84 -- # killprocess 551268 00:04:26.878 04:53:33 rpc -- common/autotest_common.sh@948 -- # '[' -z 551268 ']' 00:04:26.878 04:53:33 rpc -- common/autotest_common.sh@952 -- # kill -0 551268 00:04:26.878 04:53:33 rpc -- common/autotest_common.sh@953 -- # uname 00:04:26.878 04:53:33 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:26.878 04:53:33 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 551268 00:04:26.878 04:53:33 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:26.878 04:53:33 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:26.878 04:53:33 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 551268' 00:04:26.878 killing process with pid 551268 00:04:26.878 04:53:33 rpc -- common/autotest_common.sh@967 -- # kill 551268 00:04:26.878 04:53:33 rpc -- common/autotest_common.sh@972 -- # wait 551268 00:04:29.403 00:04:29.403 real 0m4.928s 00:04:29.403 user 0m5.463s 00:04:29.403 sys 0m0.772s 00:04:29.403 04:53:35 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.403 04:53:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.403 ************************************ 00:04:29.403 END TEST rpc 00:04:29.403 ************************************ 00:04:29.403 04:53:35 -- common/autotest_common.sh@1142 -- # return 0 00:04:29.403 04:53:35 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:29.403 04:53:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.403 04:53:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.403 04:53:35 -- common/autotest_common.sh@10 -- # set +x 00:04:29.403 ************************************ 00:04:29.403 START TEST skip_rpc 00:04:29.403 ************************************ 00:04:29.403 04:53:35 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:29.403 * Looking for test storage... 
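The teardown just before END TEST rpc is autotest_common.sh's killprocess helper: verify the pid is alive with kill -0, resolve its comm name via ps, special-case a sudo wrapper, then kill and wait. A simplified sketch of that pattern (a paraphrase for illustration, not the exact helper body):

killprocess() {
  local pid=$1
  kill -0 "$pid" || return 1                      # nothing to do if the process is already gone
  if [ "$(uname)" = Linux ]; then
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && pid=$(pgrep -P "$pid")  # assumption: if wrapped in sudo, target the child
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true                 # wait succeeds only for our own children
}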
00:04:29.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:29.403 04:53:35 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:29.403 04:53:35 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:29.403 04:53:35 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:29.403 04:53:35 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.403 04:53:35 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.403 04:53:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.403 ************************************ 00:04:29.403 START TEST skip_rpc 00:04:29.403 ************************************ 00:04:29.403 04:53:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:29.403 04:53:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=551998 00:04:29.403 04:53:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:29.403 04:53:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.403 04:53:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:29.403 [2024-07-13 04:53:35.866583] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:29.403 [2024-07-13 04:53:35.866734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid551998 ] 00:04:29.660 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.660 [2024-07-13 04:53:35.995262] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.917 [2024-07-13 04:53:36.249860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.175 04:53:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:35.175 04:53:40 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:35.175 04:53:40 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:35.175 04:53:40 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:35.175 04:53:40 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:35.175 04:53:40 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:35.175 04:53:40 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:35.175 04:53:40 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:35.175 04:53:40 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.175 04:53:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.175 04:53:40 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:35.175 04:53:40 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:35.175 04:53:40 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:35.175 04:53:40 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:35.175 04:53:40 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:35.175 04:53:40 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:35.175 04:53:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 551998 00:04:35.175 04:53:40 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 551998 ']' 00:04:35.175 04:53:40 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 551998 00:04:35.175 04:53:40 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:35.175 04:53:40 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:35.175 04:53:40 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 551998 00:04:35.175 04:53:40 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:35.175 04:53:40 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:35.175 04:53:40 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 551998' 00:04:35.175 killing process with pid 551998 00:04:35.175 04:53:40 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 551998 00:04:35.175 04:53:40 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 551998 00:04:37.072 00:04:37.072 real 0m7.512s 00:04:37.072 user 0m7.013s 00:04:37.072 sys 0m0.485s 00:04:37.072 04:53:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.072 04:53:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.072 ************************************ 00:04:37.072 END TEST skip_rpc 00:04:37.072 ************************************ 00:04:37.072 04:53:43 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:37.072 04:53:43 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:37.072 04:53:43 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.072 04:53:43 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.072 04:53:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.072 ************************************ 00:04:37.073 START TEST skip_rpc_with_json 00:04:37.073 ************************************ 00:04:37.073 04:53:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:37.073 04:53:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:37.073 04:53:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=552950 00:04:37.073 04:53:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:37.073 04:53:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.073 04:53:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 552950 00:04:37.073 04:53:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 552950 ']' 00:04:37.073 04:53:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.073 04:53:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:37.073 04:53:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
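The skip_rpc case that just finished inverts the usual assertion: with spdk_tgt launched under --no-rpc-server there is nothing listening on /var/tmp/spdk.sock, so the NOT wrapper treats a failing rpc_cmd as success (es=1 is the expected outcome). A standalone sketch of that negative check, assuming the default socket path and a fixed settle delay like the test's sleep 5:

./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
tgt_pid=$!
sleep 5                                    # give the target time to come up before probing
if ./scripts/rpc.py -t 2 spdk_get_version; then
  echo "FAIL: RPC answered although --no-rpc-server was given" >&2
  kill "$tgt_pid"; exit 1
fi
kill "$tgt_pid"                            # reaching here means the RPC correctly failed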
00:04:37.073 04:53:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:37.073 04:53:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:37.073 [2024-07-13 04:53:43.433483] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:37.073 [2024-07-13 04:53:43.433645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid552950 ] 00:04:37.073 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.331 [2024-07-13 04:53:43.573754] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.331 [2024-07-13 04:53:43.829444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.263 04:53:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:38.263 04:53:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:38.263 04:53:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:38.263 04:53:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.263 04:53:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:38.263 [2024-07-13 04:53:44.682093] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:38.263 request: 00:04:38.263 { 00:04:38.263 "trtype": "tcp", 00:04:38.263 "method": "nvmf_get_transports", 00:04:38.263 "req_id": 1 00:04:38.263 } 00:04:38.263 Got JSON-RPC error response 00:04:38.263 response: 00:04:38.263 { 00:04:38.263 "code": -19, 00:04:38.263 "message": "No such device" 00:04:38.263 } 00:04:38.263 04:53:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:38.263 04:53:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:38.263 04:53:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.263 04:53:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:38.263 [2024-07-13 04:53:44.690237] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:38.263 04:53:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.263 04:53:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:38.263 04:53:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.263 04:53:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:38.522 04:53:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.522 04:53:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:38.522 { 00:04:38.522 "subsystems": [ 00:04:38.522 { 00:04:38.522 "subsystem": "keyring", 00:04:38.522 "config": [] 00:04:38.522 }, 00:04:38.522 { 00:04:38.522 "subsystem": "iobuf", 00:04:38.522 "config": [ 00:04:38.522 { 00:04:38.522 "method": "iobuf_set_options", 00:04:38.522 "params": { 00:04:38.522 "small_pool_count": 8192, 00:04:38.522 "large_pool_count": 1024, 00:04:38.522 "small_bufsize": 8192, 00:04:38.522 "large_bufsize": 135168 00:04:38.522 } 00:04:38.522 } 00:04:38.522 ] 00:04:38.522 }, 00:04:38.522 { 00:04:38.522 "subsystem": 
"sock", 00:04:38.522 "config": [ 00:04:38.522 { 00:04:38.522 "method": "sock_set_default_impl", 00:04:38.522 "params": { 00:04:38.522 "impl_name": "posix" 00:04:38.522 } 00:04:38.522 }, 00:04:38.522 { 00:04:38.522 "method": "sock_impl_set_options", 00:04:38.522 "params": { 00:04:38.522 "impl_name": "ssl", 00:04:38.522 "recv_buf_size": 4096, 00:04:38.522 "send_buf_size": 4096, 00:04:38.522 "enable_recv_pipe": true, 00:04:38.522 "enable_quickack": false, 00:04:38.522 "enable_placement_id": 0, 00:04:38.522 "enable_zerocopy_send_server": true, 00:04:38.522 "enable_zerocopy_send_client": false, 00:04:38.522 "zerocopy_threshold": 0, 00:04:38.522 "tls_version": 0, 00:04:38.522 "enable_ktls": false 00:04:38.522 } 00:04:38.522 }, 00:04:38.522 { 00:04:38.522 "method": "sock_impl_set_options", 00:04:38.522 "params": { 00:04:38.522 "impl_name": "posix", 00:04:38.522 "recv_buf_size": 2097152, 00:04:38.522 "send_buf_size": 2097152, 00:04:38.522 "enable_recv_pipe": true, 00:04:38.522 "enable_quickack": false, 00:04:38.522 "enable_placement_id": 0, 00:04:38.522 "enable_zerocopy_send_server": true, 00:04:38.522 "enable_zerocopy_send_client": false, 00:04:38.522 "zerocopy_threshold": 0, 00:04:38.522 "tls_version": 0, 00:04:38.522 "enable_ktls": false 00:04:38.522 } 00:04:38.522 } 00:04:38.522 ] 00:04:38.522 }, 00:04:38.522 { 00:04:38.522 "subsystem": "vmd", 00:04:38.522 "config": [] 00:04:38.522 }, 00:04:38.522 { 00:04:38.522 "subsystem": "accel", 00:04:38.522 "config": [ 00:04:38.522 { 00:04:38.522 "method": "accel_set_options", 00:04:38.522 "params": { 00:04:38.522 "small_cache_size": 128, 00:04:38.522 "large_cache_size": 16, 00:04:38.522 "task_count": 2048, 00:04:38.522 "sequence_count": 2048, 00:04:38.522 "buf_count": 2048 00:04:38.522 } 00:04:38.522 } 00:04:38.522 ] 00:04:38.522 }, 00:04:38.522 { 00:04:38.522 "subsystem": "bdev", 00:04:38.522 "config": [ 00:04:38.522 { 00:04:38.522 "method": "bdev_set_options", 00:04:38.522 "params": { 00:04:38.522 "bdev_io_pool_size": 65535, 00:04:38.522 "bdev_io_cache_size": 256, 00:04:38.522 "bdev_auto_examine": true, 00:04:38.522 "iobuf_small_cache_size": 128, 00:04:38.522 "iobuf_large_cache_size": 16 00:04:38.522 } 00:04:38.522 }, 00:04:38.522 { 00:04:38.522 "method": "bdev_raid_set_options", 00:04:38.522 "params": { 00:04:38.522 "process_window_size_kb": 1024 00:04:38.522 } 00:04:38.522 }, 00:04:38.522 { 00:04:38.522 "method": "bdev_iscsi_set_options", 00:04:38.522 "params": { 00:04:38.522 "timeout_sec": 30 00:04:38.522 } 00:04:38.522 }, 00:04:38.522 { 00:04:38.522 "method": "bdev_nvme_set_options", 00:04:38.522 "params": { 00:04:38.522 "action_on_timeout": "none", 00:04:38.522 "timeout_us": 0, 00:04:38.522 "timeout_admin_us": 0, 00:04:38.522 "keep_alive_timeout_ms": 10000, 00:04:38.522 "arbitration_burst": 0, 00:04:38.522 "low_priority_weight": 0, 00:04:38.522 "medium_priority_weight": 0, 00:04:38.522 "high_priority_weight": 0, 00:04:38.522 "nvme_adminq_poll_period_us": 10000, 00:04:38.522 "nvme_ioq_poll_period_us": 0, 00:04:38.522 "io_queue_requests": 0, 00:04:38.522 "delay_cmd_submit": true, 00:04:38.522 "transport_retry_count": 4, 00:04:38.522 "bdev_retry_count": 3, 00:04:38.522 "transport_ack_timeout": 0, 00:04:38.522 "ctrlr_loss_timeout_sec": 0, 00:04:38.523 "reconnect_delay_sec": 0, 00:04:38.523 "fast_io_fail_timeout_sec": 0, 00:04:38.523 "disable_auto_failback": false, 00:04:38.523 "generate_uuids": false, 00:04:38.523 "transport_tos": 0, 00:04:38.523 "nvme_error_stat": false, 00:04:38.523 "rdma_srq_size": 0, 00:04:38.523 "io_path_stat": false, 
00:04:38.523 "allow_accel_sequence": false, 00:04:38.523 "rdma_max_cq_size": 0, 00:04:38.523 "rdma_cm_event_timeout_ms": 0, 00:04:38.523 "dhchap_digests": [ 00:04:38.523 "sha256", 00:04:38.523 "sha384", 00:04:38.523 "sha512" 00:04:38.523 ], 00:04:38.523 "dhchap_dhgroups": [ 00:04:38.523 "null", 00:04:38.523 "ffdhe2048", 00:04:38.523 "ffdhe3072", 00:04:38.523 "ffdhe4096", 00:04:38.523 "ffdhe6144", 00:04:38.523 "ffdhe8192" 00:04:38.523 ] 00:04:38.523 } 00:04:38.523 }, 00:04:38.523 { 00:04:38.523 "method": "bdev_nvme_set_hotplug", 00:04:38.523 "params": { 00:04:38.523 "period_us": 100000, 00:04:38.523 "enable": false 00:04:38.523 } 00:04:38.523 }, 00:04:38.523 { 00:04:38.523 "method": "bdev_wait_for_examine" 00:04:38.523 } 00:04:38.523 ] 00:04:38.523 }, 00:04:38.523 { 00:04:38.523 "subsystem": "scsi", 00:04:38.523 "config": null 00:04:38.523 }, 00:04:38.523 { 00:04:38.523 "subsystem": "scheduler", 00:04:38.523 "config": [ 00:04:38.523 { 00:04:38.523 "method": "framework_set_scheduler", 00:04:38.523 "params": { 00:04:38.523 "name": "static" 00:04:38.523 } 00:04:38.523 } 00:04:38.523 ] 00:04:38.523 }, 00:04:38.523 { 00:04:38.523 "subsystem": "vhost_scsi", 00:04:38.523 "config": [] 00:04:38.523 }, 00:04:38.523 { 00:04:38.523 "subsystem": "vhost_blk", 00:04:38.523 "config": [] 00:04:38.523 }, 00:04:38.523 { 00:04:38.523 "subsystem": "ublk", 00:04:38.523 "config": [] 00:04:38.523 }, 00:04:38.523 { 00:04:38.523 "subsystem": "nbd", 00:04:38.523 "config": [] 00:04:38.523 }, 00:04:38.523 { 00:04:38.523 "subsystem": "nvmf", 00:04:38.523 "config": [ 00:04:38.523 { 00:04:38.523 "method": "nvmf_set_config", 00:04:38.523 "params": { 00:04:38.523 "discovery_filter": "match_any", 00:04:38.523 "admin_cmd_passthru": { 00:04:38.523 "identify_ctrlr": false 00:04:38.523 } 00:04:38.523 } 00:04:38.523 }, 00:04:38.523 { 00:04:38.523 "method": "nvmf_set_max_subsystems", 00:04:38.523 "params": { 00:04:38.523 "max_subsystems": 1024 00:04:38.523 } 00:04:38.523 }, 00:04:38.523 { 00:04:38.523 "method": "nvmf_set_crdt", 00:04:38.523 "params": { 00:04:38.523 "crdt1": 0, 00:04:38.523 "crdt2": 0, 00:04:38.523 "crdt3": 0 00:04:38.523 } 00:04:38.523 }, 00:04:38.523 { 00:04:38.523 "method": "nvmf_create_transport", 00:04:38.523 "params": { 00:04:38.523 "trtype": "TCP", 00:04:38.523 "max_queue_depth": 128, 00:04:38.523 "max_io_qpairs_per_ctrlr": 127, 00:04:38.523 "in_capsule_data_size": 4096, 00:04:38.523 "max_io_size": 131072, 00:04:38.523 "io_unit_size": 131072, 00:04:38.523 "max_aq_depth": 128, 00:04:38.523 "num_shared_buffers": 511, 00:04:38.523 "buf_cache_size": 4294967295, 00:04:38.523 "dif_insert_or_strip": false, 00:04:38.523 "zcopy": false, 00:04:38.523 "c2h_success": true, 00:04:38.523 "sock_priority": 0, 00:04:38.523 "abort_timeout_sec": 1, 00:04:38.523 "ack_timeout": 0, 00:04:38.523 "data_wr_pool_size": 0 00:04:38.523 } 00:04:38.523 } 00:04:38.523 ] 00:04:38.523 }, 00:04:38.523 { 00:04:38.523 "subsystem": "iscsi", 00:04:38.523 "config": [ 00:04:38.523 { 00:04:38.523 "method": "iscsi_set_options", 00:04:38.523 "params": { 00:04:38.523 "node_base": "iqn.2016-06.io.spdk", 00:04:38.523 "max_sessions": 128, 00:04:38.523 "max_connections_per_session": 2, 00:04:38.523 "max_queue_depth": 64, 00:04:38.523 "default_time2wait": 2, 00:04:38.523 "default_time2retain": 20, 00:04:38.523 "first_burst_length": 8192, 00:04:38.523 "immediate_data": true, 00:04:38.523 "allow_duplicated_isid": false, 00:04:38.523 "error_recovery_level": 0, 00:04:38.523 "nop_timeout": 60, 00:04:38.523 "nop_in_interval": 30, 00:04:38.523 "disable_chap": 
false, 00:04:38.523 "require_chap": false, 00:04:38.523 "mutual_chap": false, 00:04:38.523 "chap_group": 0, 00:04:38.523 "max_large_datain_per_connection": 64, 00:04:38.523 "max_r2t_per_connection": 4, 00:04:38.523 "pdu_pool_size": 36864, 00:04:38.523 "immediate_data_pool_size": 16384, 00:04:38.523 "data_out_pool_size": 2048 00:04:38.523 } 00:04:38.523 } 00:04:38.523 ] 00:04:38.523 } 00:04:38.523 ] 00:04:38.523 } 00:04:38.523 04:53:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:38.523 04:53:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 552950 00:04:38.523 04:53:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 552950 ']' 00:04:38.523 04:53:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 552950 00:04:38.523 04:53:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:38.523 04:53:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:38.523 04:53:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 552950 00:04:38.523 04:53:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:38.523 04:53:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:38.523 04:53:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 552950' 00:04:38.523 killing process with pid 552950 00:04:38.523 04:53:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 552950 00:04:38.523 04:53:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 552950 00:04:41.052 04:53:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=553369 00:04:41.052 04:53:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:41.052 04:53:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:46.371 04:53:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 553369 00:04:46.371 04:53:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 553369 ']' 00:04:46.371 04:53:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 553369 00:04:46.371 04:53:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:46.371 04:53:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:46.371 04:53:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 553369 00:04:46.371 04:53:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:46.371 04:53:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:46.371 04:53:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 553369' 00:04:46.371 killing process with pid 553369 00:04:46.371 04:53:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 553369 00:04:46.371 04:53:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 553369 00:04:48.898 04:53:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:48.898 04:53:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:48.898 00:04:48.898 real 0m11.546s 00:04:48.898 user 0m10.980s 00:04:48.898 sys 0m1.051s 00:04:48.898 04:53:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.898 04:53:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.898 ************************************ 00:04:48.898 END TEST skip_rpc_with_json 00:04:48.898 ************************************ 00:04:48.898 04:53:54 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:48.898 04:53:54 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:48.898 04:53:54 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.898 04:53:54 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.898 04:53:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.898 ************************************ 00:04:48.899 START TEST skip_rpc_with_delay 00:04:48.899 ************************************ 00:04:48.899 04:53:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:48.899 04:53:54 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:48.899 04:53:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:48.899 04:53:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:48.899 04:53:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:48.899 04:53:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:48.899 04:53:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:48.899 04:53:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:48.899 04:53:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:48.899 04:53:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:48.899 04:53:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:48.899 04:53:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:48.899 04:53:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:48.899 [2024-07-13 04:53:55.015706] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
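That *ERROR* line is the whole point of skip_rpc_with_delay: --wait-for-rpc tells the app to pause initialization until a framework_start_init RPC arrives, which is unsatisfiable when --no-rpc-server suppresses the RPC server, so startup must refuse the combination. A sketch of the assertion, where the non-zero exit status is the pass condition:

if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
  echo "FAIL: contradictory flags were accepted" >&2
  exit 1
fi
echo "PASS: target rejected --wait-for-rpc without an RPC server"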
00:04:48.899 [2024-07-13 04:53:55.015909] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:48.899 04:53:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:48.899 04:53:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:48.899 04:53:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:48.899 04:53:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:48.899 00:04:48.899 real 0m0.139s 00:04:48.899 user 0m0.082s 00:04:48.899 sys 0m0.056s 00:04:48.899 04:53:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.899 04:53:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:48.899 ************************************ 00:04:48.899 END TEST skip_rpc_with_delay 00:04:48.899 ************************************ 00:04:48.899 04:53:55 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:48.899 04:53:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:48.899 04:53:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:48.899 04:53:55 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:48.899 04:53:55 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.899 04:53:55 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.899 04:53:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.899 ************************************ 00:04:48.899 START TEST exit_on_failed_rpc_init 00:04:48.899 ************************************ 00:04:48.899 04:53:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:48.899 04:53:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=554348 00:04:48.899 04:53:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:48.899 04:53:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 554348 00:04:48.899 04:53:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 554348 ']' 00:04:48.899 04:53:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.899 04:53:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:48.899 04:53:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.899 04:53:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:48.899 04:53:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:48.899 [2024-07-13 04:53:55.205368] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:04:48.899 [2024-07-13 04:53:55.205502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid554348 ] 00:04:48.899 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.899 [2024-07-13 04:53:55.329648] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.158 [2024-07-13 04:53:55.583245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.092 04:53:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:50.092 04:53:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:50.092 04:53:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.092 04:53:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.092 04:53:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:50.092 04:53:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.092 04:53:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.092 04:53:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.092 04:53:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.093 04:53:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.093 04:53:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.093 04:53:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.093 04:53:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.093 04:53:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:50.093 04:53:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.093 [2024-07-13 04:53:56.564494] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:04:50.093 [2024-07-13 04:53:56.564636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid554607 ] 00:04:50.351 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.351 [2024-07-13 04:53:56.696101] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.609 [2024-07-13 04:53:56.950415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.609 [2024-07-13 04:53:56.950580] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:50.609 [2024-07-13 04:53:56.950627] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:50.609 [2024-07-13 04:53:56.950652] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:51.176 04:53:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:51.176 04:53:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:51.176 04:53:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:51.176 04:53:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:51.176 04:53:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:51.176 04:53:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:51.176 04:53:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:51.176 04:53:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 554348 00:04:51.176 04:53:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 554348 ']' 00:04:51.176 04:53:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 554348 00:04:51.176 04:53:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:51.176 04:53:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:51.176 04:53:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 554348 00:04:51.176 04:53:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:51.176 04:53:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:51.176 04:53:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 554348' 00:04:51.176 killing process with pid 554348 00:04:51.176 04:53:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 554348 00:04:51.176 04:53:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 554348 00:04:53.709 00:04:53.709 real 0m4.822s 00:04:53.709 user 0m5.520s 00:04:53.709 sys 0m0.729s 00:04:53.709 04:53:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.709 04:53:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:53.709 ************************************ 00:04:53.709 END TEST exit_on_failed_rpc_init 00:04:53.709 ************************************ 00:04:53.709 04:53:59 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:53.709 04:53:59 skip_rpc -- rpc/skip_rpc.sh@81 -- 
# rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:53.709 00:04:53.709 real 0m24.254s 00:04:53.709 user 0m23.689s 00:04:53.709 sys 0m2.479s 00:04:53.709 04:53:59 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.709 04:53:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.709 ************************************ 00:04:53.709 END TEST skip_rpc 00:04:53.709 ************************************ 00:04:53.709 04:53:59 -- common/autotest_common.sh@1142 -- # return 0 00:04:53.709 04:53:59 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:53.709 04:53:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.709 04:53:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.709 04:53:59 -- common/autotest_common.sh@10 -- # set +x 00:04:53.709 ************************************ 00:04:53.709 START TEST rpc_client 00:04:53.709 ************************************ 00:04:53.709 04:54:00 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:53.709 * Looking for test storage... 00:04:53.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:53.709 04:54:00 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:53.709 OK 00:04:53.709 04:54:00 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:53.709 00:04:53.709 real 0m0.095s 00:04:53.709 user 0m0.052s 00:04:53.709 sys 0m0.049s 00:04:53.709 04:54:00 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.709 04:54:00 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:53.709 ************************************ 00:04:53.709 END TEST rpc_client 00:04:53.709 ************************************ 00:04:53.709 04:54:00 -- common/autotest_common.sh@1142 -- # return 0 00:04:53.709 04:54:00 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:53.709 04:54:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.709 04:54:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.709 04:54:00 -- common/autotest_common.sh@10 -- # set +x 00:04:53.709 ************************************ 00:04:53.709 START TEST json_config 00:04:53.709 ************************************ 00:04:53.709 04:54:00 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:53.709 04:54:00 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:53.709 04:54:00 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:53.709 04:54:00 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:53.709 04:54:00 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:53.710 04:54:00 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:53.710 04:54:00 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:53.710 04:54:00 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:53.710 04:54:00 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:53.710 04:54:00 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:53.710 04:54:00 json_config -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:53.710 04:54:00 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:53.710 04:54:00 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:53.710 04:54:00 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:53.710 04:54:00 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:53.710 04:54:00 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:53.710 04:54:00 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:53.710 04:54:00 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:53.710 04:54:00 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:53.710 04:54:00 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:53.710 04:54:00 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:53.710 04:54:00 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:53.710 04:54:00 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:53.710 04:54:00 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.710 04:54:00 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.710 04:54:00 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.710 04:54:00 json_config -- paths/export.sh@5 -- # export PATH 00:04:53.710 04:54:00 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.710 04:54:00 json_config -- nvmf/common.sh@47 -- # : 0 00:04:53.710 04:54:00 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:53.710 04:54:00 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:53.710 04:54:00 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:53.710 04:54:00 json_config -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:53.710 04:54:00 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:53.710 04:54:00 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:53.710 04:54:00 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:53.710 04:54:00 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:53.710 04:54:00 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:53.710 04:54:00 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:53.710 04:54:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:53.710 04:54:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:53.710 04:54:00 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:53.710 04:54:00 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:53.710 04:54:00 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:53.710 04:54:00 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:53.710 04:54:00 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:53.710 04:54:00 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:53.710 04:54:00 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:53.710 04:54:00 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:53.710 04:54:00 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:53.710 04:54:00 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:53.710 04:54:00 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:53.710 04:54:00 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:53.710 INFO: JSON configuration test init 00:04:53.710 04:54:00 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:53.710 04:54:00 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:53.710 04:54:00 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:53.710 04:54:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.710 04:54:00 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:53.710 04:54:00 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:53.710 04:54:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.710 04:54:00 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:53.710 04:54:00 json_config -- json_config/common.sh@9 -- # local app=target 00:04:53.710 04:54:00 json_config -- json_config/common.sh@10 -- # shift 00:04:53.710 04:54:00 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:53.710 04:54:00 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:53.710 04:54:00 json_config -- 
json_config/common.sh@15 -- # local app_extra_params= 00:04:53.710 04:54:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:53.710 04:54:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:53.710 04:54:00 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=555153 00:04:53.710 04:54:00 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:53.710 Waiting for target to run... 00:04:53.710 04:54:00 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:53.710 04:54:00 json_config -- json_config/common.sh@25 -- # waitforlisten 555153 /var/tmp/spdk_tgt.sock 00:04:53.710 04:54:00 json_config -- common/autotest_common.sh@829 -- # '[' -z 555153 ']' 00:04:53.710 04:54:00 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:53.710 04:54:00 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:53.710 04:54:00 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:53.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:53.710 04:54:00 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:53.710 04:54:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.969 [2024-07-13 04:54:00.301690] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:53.969 [2024-07-13 04:54:00.301857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid555153 ] 00:04:53.969 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.535 [2024-07-13 04:54:00.729991] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.535 [2024-07-13 04:54:00.959155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.793 04:54:01 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:54.793 04:54:01 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:54.793 04:54:01 json_config -- json_config/common.sh@26 -- # echo '' 00:04:54.793 00:04:54.793 04:54:01 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:54.793 04:54:01 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:54.793 04:54:01 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:54.793 04:54:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.793 04:54:01 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:54.793 04:54:01 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:54.793 04:54:01 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:54.793 04:54:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.793 04:54:01 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:54.793 04:54:01 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:54.793 04:54:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock load_config 00:04:58.978 04:54:05 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:58.978 04:54:05 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:58.978 04:54:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:58.978 04:54:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.978 04:54:05 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:58.978 04:54:05 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:58.978 04:54:05 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:58.978 04:54:05 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:58.978 04:54:05 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:58.978 04:54:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:58.978 04:54:05 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:58.978 04:54:05 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:58.978 04:54:05 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:58.978 04:54:05 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:58.978 04:54:05 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:58.978 04:54:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.978 04:54:05 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:58.978 04:54:05 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:58.978 04:54:05 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:58.978 04:54:05 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:58.978 04:54:05 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:58.978 04:54:05 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:58.978 04:54:05 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:58.978 04:54:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:58.978 04:54:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.978 04:54:05 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:58.978 04:54:05 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:58.978 04:54:05 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:58.978 04:54:05 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:58.978 04:54:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:59.236 MallocForNvmf0 00:04:59.236 04:54:05 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:59.236 04:54:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:59.494 MallocForNvmf1 
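The tgt_rpc wrapper seen throughout this test is just rpc.py aimed at the Unix-domain socket the target was started with (-r /var/tmp/spdk_tgt.sock), so the whole bring-up can be reproduced by hand. A condensed sketch of the sequence — the two malloc creates above plus the transport/subsystem calls that follow — with paths shortened to repo-relative form and every argument taken verbatim from the xtrace; only the RPC shell variable is introduced here for brevity:

  # All RPCs go to the target started with -r /var/tmp/spdk_tgt.sock
  RPC='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MB bdev, 512-byte blocks
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MB bdev, 1024-byte blocks
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

The listener on 127.0.0.1:4420 is what the target re-advertises later in the log, once it is relaunched from the saved JSON config.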
00:04:59.494 04:54:05 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:59.494 04:54:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:59.752 [2024-07-13 04:54:06.016574] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:59.752 04:54:06 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:59.752 04:54:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:00.010 04:54:06 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:00.010 04:54:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:00.269 04:54:06 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:00.269 04:54:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:00.527 04:54:06 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:00.527 04:54:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:00.527 [2024-07-13 04:54:06.999986] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:00.527 04:54:07 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:00.527 04:54:07 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:00.527 04:54:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.786 04:54:07 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:00.786 04:54:07 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:00.786 04:54:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.786 04:54:07 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:00.786 04:54:07 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:00.786 04:54:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:01.044 MallocBdevForConfigChangeCheck 00:05:01.044 04:54:07 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:01.044 04:54:07 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:01.044 04:54:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.044 04:54:07 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:01.044 04:54:07 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:01.302 04:54:07 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:01.302 INFO: shutting down applications... 00:05:01.302 04:54:07 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:01.302 04:54:07 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:01.302 04:54:07 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:01.302 04:54:07 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:03.200 Calling clear_iscsi_subsystem 00:05:03.200 Calling clear_nvmf_subsystem 00:05:03.200 Calling clear_nbd_subsystem 00:05:03.200 Calling clear_ublk_subsystem 00:05:03.200 Calling clear_vhost_blk_subsystem 00:05:03.200 Calling clear_vhost_scsi_subsystem 00:05:03.200 Calling clear_bdev_subsystem 00:05:03.200 04:54:09 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:03.200 04:54:09 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:03.200 04:54:09 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:03.200 04:54:09 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:03.200 04:54:09 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:03.200 04:54:09 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:03.457 04:54:09 json_config -- json_config/json_config.sh@345 -- # break 00:05:03.457 04:54:09 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:03.457 04:54:09 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:03.457 04:54:09 json_config -- json_config/common.sh@31 -- # local app=target 00:05:03.457 04:54:09 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:03.457 04:54:09 json_config -- json_config/common.sh@35 -- # [[ -n 555153 ]] 00:05:03.457 04:54:09 json_config -- json_config/common.sh@38 -- # kill -SIGINT 555153 00:05:03.457 04:54:09 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:03.457 04:54:09 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.457 04:54:09 json_config -- json_config/common.sh@41 -- # kill -0 555153 00:05:03.457 04:54:09 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:04.022 04:54:10 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:04.022 04:54:10 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.022 04:54:10 json_config -- json_config/common.sh@41 -- # kill -0 555153 00:05:04.022 04:54:10 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:04.279 04:54:10 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:04.279 04:54:10 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.279 04:54:10 json_config -- json_config/common.sh@41 -- # kill -0 555153 00:05:04.279 04:54:10 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:04.844 04:54:11 
json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:04.844 04:54:11 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.844 04:54:11 json_config -- json_config/common.sh@41 -- # kill -0 555153 00:05:04.844 04:54:11 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:04.844 04:54:11 json_config -- json_config/common.sh@43 -- # break 00:05:04.844 04:54:11 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:04.844 04:54:11 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:04.844 SPDK target shutdown done 00:05:04.844 04:54:11 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:04.844 INFO: relaunching applications... 00:05:04.844 04:54:11 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:04.844 04:54:11 json_config -- json_config/common.sh@9 -- # local app=target 00:05:04.844 04:54:11 json_config -- json_config/common.sh@10 -- # shift 00:05:04.844 04:54:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:04.844 04:54:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:04.844 04:54:11 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:04.844 04:54:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.844 04:54:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.844 04:54:11 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=557197 00:05:04.844 04:54:11 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:04.844 04:54:11 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:04.844 Waiting for target to run... 00:05:04.844 04:54:11 json_config -- json_config/common.sh@25 -- # waitforlisten 557197 /var/tmp/spdk_tgt.sock 00:05:04.844 04:54:11 json_config -- common/autotest_common.sh@829 -- # '[' -z 557197 ']' 00:05:04.844 04:54:11 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:04.844 04:54:11 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.844 04:54:11 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:04.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:04.844 04:54:11 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.844 04:54:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.844 [2024-07-13 04:54:11.326447] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:04.844 [2024-07-13 04:54:11.326620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid557197 ] 00:05:05.102 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.668 [2024-07-13 04:54:11.927109] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.668 [2024-07-13 04:54:12.163536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.849 [2024-07-13 04:54:15.881684] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:09.849 [2024-07-13 04:54:15.914239] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:10.107 04:54:16 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.107 04:54:16 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:10.107 04:54:16 json_config -- json_config/common.sh@26 -- # echo '' 00:05:10.107 00:05:10.107 04:54:16 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:10.107 04:54:16 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:10.107 INFO: Checking if target configuration is the same... 00:05:10.107 04:54:16 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:10.107 04:54:16 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:10.107 04:54:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:10.107 + '[' 2 -ne 2 ']' 00:05:10.107 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:10.107 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:10.107 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:10.107 +++ basename /dev/fd/62 00:05:10.107 ++ mktemp /tmp/62.XXX 00:05:10.107 + tmp_file_1=/tmp/62.jEH 00:05:10.107 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:10.107 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:10.107 + tmp_file_2=/tmp/spdk_tgt_config.json.kor 00:05:10.107 + ret=0 00:05:10.107 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:10.365 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:10.365 + diff -u /tmp/62.jEH /tmp/spdk_tgt_config.json.kor 00:05:10.365 + echo 'INFO: JSON config files are the same' 00:05:10.365 INFO: JSON config files are the same 00:05:10.365 + rm /tmp/62.jEH /tmp/spdk_tgt_config.json.kor 00:05:10.365 + exit 0 00:05:10.365 04:54:16 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:10.365 04:54:16 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:10.365 INFO: changing configuration and checking if this can be detected... 
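Both sides of the comparison above are canonicalized before diffing: json_diff.sh feeds the live save_config output (arriving via /dev/fd/62) and the on-disk spdk_tgt_config.json through config_filter.py -method sort into mktemp files, then relies on diff -u's exit status. A minimal sketch of the same check, assuming config_filter.py filters stdin to stdout as the xtrace suggests, and with illustrative temp-file names in place of the mktemp results:

  RPC='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  # Canonicalize the running target's config and the saved file the same way
  $RPC save_config | test/json_config/config_filter.py -method sort > /tmp/live.sorted
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/disk.sorted
  diff -u /tmp/live.sorted /tmp/disk.sorted && echo 'INFO: JSON config files are the same'

Sorting first means key and array ordering differences between the saved file and a freshly serialized config cannot produce a spurious mismatch.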
00:05:10.365 04:54:16 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:10.365 04:54:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:10.623 04:54:17 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:10.623 04:54:17 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:10.623 04:54:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:10.623 + '[' 2 -ne 2 ']' 00:05:10.623 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:10.623 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:10.623 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:10.623 +++ basename /dev/fd/62 00:05:10.623 ++ mktemp /tmp/62.XXX 00:05:10.881 + tmp_file_1=/tmp/62.yUE 00:05:10.881 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:10.881 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:10.881 + tmp_file_2=/tmp/spdk_tgt_config.json.fjC 00:05:10.881 + ret=0 00:05:10.881 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:11.139 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:11.139 + diff -u /tmp/62.yUE /tmp/spdk_tgt_config.json.fjC 00:05:11.139 + ret=1 00:05:11.139 + echo '=== Start of file: /tmp/62.yUE ===' 00:05:11.139 + cat /tmp/62.yUE 00:05:11.139 + echo '=== End of file: /tmp/62.yUE ===' 00:05:11.139 + echo '' 00:05:11.139 + echo '=== Start of file: /tmp/spdk_tgt_config.json.fjC ===' 00:05:11.139 + cat /tmp/spdk_tgt_config.json.fjC 00:05:11.139 + echo '=== End of file: /tmp/spdk_tgt_config.json.fjC ===' 00:05:11.139 + echo '' 00:05:11.139 + rm /tmp/62.yUE /tmp/spdk_tgt_config.json.fjC 00:05:11.139 + exit 1 00:05:11.139 04:54:17 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:11.139 INFO: configuration change detected. 
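The "change" injected above is a single live RPC, equivalent to:

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck

With that bdev gone from the running target but still present in spdk_tgt_config.json, the repeated sort-and-diff pass exits 1 (diff -u's code for differing inputs), json_diff.sh dumps both canonical files into the build log, and the test records ret=1 — the detection it was after. The MallocBdevForConfigChangeCheck bdev created earlier evidently exists only to give the test a disposable object whose deletion perturbs the config without touching the NVMe-oF subsystem.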
00:05:11.139 04:54:17 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:11.140 04:54:17 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:11.140 04:54:17 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:11.140 04:54:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.140 04:54:17 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:11.140 04:54:17 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:11.140 04:54:17 json_config -- json_config/json_config.sh@317 -- # [[ -n 557197 ]] 00:05:11.140 04:54:17 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:11.140 04:54:17 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:11.140 04:54:17 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:11.140 04:54:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.140 04:54:17 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:11.140 04:54:17 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:11.140 04:54:17 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:11.140 04:54:17 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:11.140 04:54:17 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:11.140 04:54:17 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:11.140 04:54:17 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:11.140 04:54:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.140 04:54:17 json_config -- json_config/json_config.sh@323 -- # killprocess 557197 00:05:11.140 04:54:17 json_config -- common/autotest_common.sh@948 -- # '[' -z 557197 ']' 00:05:11.140 04:54:17 json_config -- common/autotest_common.sh@952 -- # kill -0 557197 00:05:11.140 04:54:17 json_config -- common/autotest_common.sh@953 -- # uname 00:05:11.140 04:54:17 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:11.140 04:54:17 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 557197 00:05:11.140 04:54:17 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:11.140 04:54:17 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:11.140 04:54:17 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 557197' 00:05:11.140 killing process with pid 557197 00:05:11.140 04:54:17 json_config -- common/autotest_common.sh@967 -- # kill 557197 00:05:11.140 04:54:17 json_config -- common/autotest_common.sh@972 -- # wait 557197 00:05:13.669 04:54:20 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.669 04:54:20 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:13.669 04:54:20 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:13.669 04:54:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.669 04:54:20 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:13.669 04:54:20 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:13.669 INFO: Success 00:05:13.669 00:05:13.669 real 0m19.882s 00:05:13.669 user 
0m21.401s 00:05:13.669 sys 0m2.489s 00:05:13.669 04:54:20 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.669 04:54:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.669 ************************************ 00:05:13.669 END TEST json_config 00:05:13.669 ************************************ 00:05:13.669 04:54:20 -- common/autotest_common.sh@1142 -- # return 0 00:05:13.669 04:54:20 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:13.669 04:54:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.669 04:54:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.669 04:54:20 -- common/autotest_common.sh@10 -- # set +x 00:05:13.669 ************************************ 00:05:13.669 START TEST json_config_extra_key 00:05:13.669 ************************************ 00:05:13.669 04:54:20 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:13.669 04:54:20 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:13.669 04:54:20 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:13.669 04:54:20 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:13.669 04:54:20 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:13.669 04:54:20 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:13.669 04:54:20 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:13.669 04:54:20 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:13.669 04:54:20 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:13.669 04:54:20 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:13.669 04:54:20 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:13.669 04:54:20 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:13.669 04:54:20 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:13.669 04:54:20 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:13.669 04:54:20 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:13.669 04:54:20 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:13.669 04:54:20 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:13.669 04:54:20 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:13.669 04:54:20 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:13.669 04:54:20 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:13.669 04:54:20 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:13.669 04:54:20 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:13.669 04:54:20 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:13.669 04:54:20 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.669 04:54:20 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.669 04:54:20 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.669 04:54:20 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:13.669 04:54:20 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.669 04:54:20 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:13.669 04:54:20 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:13.669 04:54:20 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:13.669 04:54:20 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:13.669 04:54:20 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:13.669 04:54:20 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:13.669 04:54:20 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:13.669 04:54:20 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:13.669 04:54:20 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:13.669 04:54:20 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:13.669 04:54:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:13.670 04:54:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:13.670 04:54:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:13.670 04:54:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:13.670 04:54:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:13.670 04:54:20 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:13.670 04:54:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:13.670 04:54:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:13.670 04:54:20 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:13.670 04:54:20 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:13.670 INFO: launching applications... 00:05:13.670 04:54:20 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:13.670 04:54:20 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:13.670 04:54:20 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:13.670 04:54:20 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:13.670 04:54:20 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:13.670 04:54:20 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:13.670 04:54:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.670 04:54:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.670 04:54:20 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=558379 00:05:13.670 04:54:20 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:13.670 04:54:20 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:13.670 Waiting for target to run... 00:05:13.670 04:54:20 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 558379 /var/tmp/spdk_tgt.sock 00:05:13.670 04:54:20 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 558379 ']' 00:05:13.670 04:54:20 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:13.670 04:54:20 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.670 04:54:20 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:13.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:13.670 04:54:20 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.670 04:54:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:13.928 [2024-07-13 04:54:20.212689] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:13.928 [2024-07-13 04:54:20.212837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid558379 ] 00:05:13.928 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.186 [2024-07-13 04:54:20.633727] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.444 [2024-07-13 04:54:20.858952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.375 04:54:21 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.375 04:54:21 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:15.375 04:54:21 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:15.375 00:05:15.375 04:54:21 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:15.375 INFO: shutting down applications... 00:05:15.375 04:54:21 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:15.375 04:54:21 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:15.375 04:54:21 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:15.375 04:54:21 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 558379 ]] 00:05:15.375 04:54:21 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 558379 00:05:15.375 04:54:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:15.375 04:54:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.375 04:54:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 558379 00:05:15.375 04:54:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:15.632 04:54:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:15.632 04:54:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.632 04:54:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 558379 00:05:15.632 04:54:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:16.198 04:54:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:16.198 04:54:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:16.198 04:54:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 558379 00:05:16.198 04:54:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:16.764 04:54:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:16.764 04:54:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:16.764 04:54:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 558379 00:05:16.764 04:54:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:17.332 04:54:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:17.332 04:54:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.332 04:54:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 558379 00:05:17.332 04:54:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:17.590 04:54:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:17.590 04:54:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.590 04:54:24 json_config_extra_key -- 
json_config/common.sh@41 -- # kill -0 558379 00:05:17.590 04:54:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:18.155 04:54:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:18.155 04:54:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:18.155 04:54:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 558379 00:05:18.155 04:54:24 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:18.155 04:54:24 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:18.155 04:54:24 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:18.155 04:54:24 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:18.155 SPDK target shutdown done 00:05:18.155 04:54:24 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:18.155 Success 00:05:18.155 00:05:18.155 real 0m4.472s 00:05:18.155 user 0m4.168s 00:05:18.155 sys 0m0.625s 00:05:18.155 04:54:24 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.155 04:54:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:18.155 ************************************ 00:05:18.155 END TEST json_config_extra_key 00:05:18.155 ************************************ 00:05:18.155 04:54:24 -- common/autotest_common.sh@1142 -- # return 0 00:05:18.155 04:54:24 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:18.155 04:54:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.155 04:54:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.155 04:54:24 -- common/autotest_common.sh@10 -- # set +x 00:05:18.155 ************************************ 00:05:18.155 START TEST alias_rpc 00:05:18.155 ************************************ 00:05:18.155 04:54:24 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:18.155 * Looking for test storage... 00:05:18.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:18.155 04:54:24 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:18.155 04:54:24 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=558962 00:05:18.155 04:54:24 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.155 04:54:24 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 558962 00:05:18.155 04:54:24 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 558962 ']' 00:05:18.155 04:54:24 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.155 04:54:24 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.155 04:54:24 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.155 04:54:24 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.155 04:54:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.413 [2024-07-13 04:54:24.736728] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:18.413 [2024-07-13 04:54:24.736906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid558962 ] 00:05:18.413 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.413 [2024-07-13 04:54:24.857724] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.671 [2024-07-13 04:54:25.108495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.605 04:54:25 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.605 04:54:25 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:19.605 04:54:25 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:19.864 04:54:26 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 558962 00:05:19.864 04:54:26 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 558962 ']' 00:05:19.864 04:54:26 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 558962 00:05:19.864 04:54:26 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:19.864 04:54:26 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:19.864 04:54:26 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 558962 00:05:19.864 04:54:26 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:19.864 04:54:26 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:19.864 04:54:26 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 558962' 00:05:19.864 killing process with pid 558962 00:05:19.864 04:54:26 alias_rpc -- common/autotest_common.sh@967 -- # kill 558962 00:05:19.864 04:54:26 alias_rpc -- common/autotest_common.sh@972 -- # wait 558962 00:05:22.392 00:05:22.392 real 0m4.159s 00:05:22.392 user 0m4.274s 00:05:22.392 sys 0m0.611s 00:05:22.392 04:54:28 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.392 04:54:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.392 ************************************ 00:05:22.392 END TEST alias_rpc 00:05:22.392 ************************************ 00:05:22.392 04:54:28 -- common/autotest_common.sh@1142 -- # return 0 00:05:22.392 04:54:28 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:22.392 04:54:28 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:22.392 04:54:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.392 04:54:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.392 04:54:28 -- common/autotest_common.sh@10 -- # set +x 00:05:22.392 ************************************ 00:05:22.392 START TEST spdkcli_tcp 00:05:22.392 ************************************ 00:05:22.392 04:54:28 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:22.392 * Looking for test storage... 
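The spdkcli_tcp run starting here differs from the preceding suites in one respect: the target still serves RPC on /var/tmp/spdk.sock, but a socat bridge exposes that socket over TCP so rpc.py's network path gets exercised. The shape of the setup, with the port and flags taken verbatim from the xtrace below:

  # Bridge the target's Unix-domain RPC socket to TCP port 9998
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  # Drive RPCs over TCP; -r 100 connect retries and the -t 2 second timeout
  # presumably absorb the race with the just-backgrounded socat
  scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

rpc_get_methods makes a cheap first probe: the method list it returns (beginning below) confirms the bridge works end to end.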
00:05:22.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:22.392 04:54:28 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:22.392 04:54:28 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:22.392 04:54:28 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:22.392 04:54:28 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:22.392 04:54:28 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:22.392 04:54:28 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:22.392 04:54:28 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:22.392 04:54:28 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:22.392 04:54:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.392 04:54:28 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=559433 00:05:22.392 04:54:28 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:22.392 04:54:28 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 559433 00:05:22.392 04:54:28 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 559433 ']' 00:05:22.392 04:54:28 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.392 04:54:28 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.393 04:54:28 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.393 04:54:28 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.393 04:54:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.666 [2024-07-13 04:54:28.958972] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:22.666 [2024-07-13 04:54:28.959119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid559433 ] 00:05:22.667 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.667 [2024-07-13 04:54:29.085399] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.925 [2024-07-13 04:54:29.339172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.925 [2024-07-13 04:54:29.339179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.858 04:54:30 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.858 04:54:30 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:23.858 04:54:30 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=559686 00:05:23.858 04:54:30 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:23.858 04:54:30 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:24.115 [ 00:05:24.115 "bdev_malloc_delete", 00:05:24.115 "bdev_malloc_create", 00:05:24.115 "bdev_null_resize", 00:05:24.115 "bdev_null_delete", 00:05:24.115 "bdev_null_create", 00:05:24.115 "bdev_nvme_cuse_unregister", 00:05:24.115 "bdev_nvme_cuse_register", 00:05:24.115 "bdev_opal_new_user", 00:05:24.115 "bdev_opal_set_lock_state", 00:05:24.115 "bdev_opal_delete", 00:05:24.115 "bdev_opal_get_info", 00:05:24.115 "bdev_opal_create", 00:05:24.115 "bdev_nvme_opal_revert", 00:05:24.115 "bdev_nvme_opal_init", 00:05:24.115 "bdev_nvme_send_cmd", 00:05:24.115 "bdev_nvme_get_path_iostat", 00:05:24.115 "bdev_nvme_get_mdns_discovery_info", 00:05:24.115 "bdev_nvme_stop_mdns_discovery", 00:05:24.115 "bdev_nvme_start_mdns_discovery", 00:05:24.115 "bdev_nvme_set_multipath_policy", 00:05:24.115 "bdev_nvme_set_preferred_path", 00:05:24.115 "bdev_nvme_get_io_paths", 00:05:24.115 "bdev_nvme_remove_error_injection", 00:05:24.115 "bdev_nvme_add_error_injection", 00:05:24.115 "bdev_nvme_get_discovery_info", 00:05:24.115 "bdev_nvme_stop_discovery", 00:05:24.115 "bdev_nvme_start_discovery", 00:05:24.115 "bdev_nvme_get_controller_health_info", 00:05:24.115 "bdev_nvme_disable_controller", 00:05:24.115 "bdev_nvme_enable_controller", 00:05:24.115 "bdev_nvme_reset_controller", 00:05:24.115 "bdev_nvme_get_transport_statistics", 00:05:24.115 "bdev_nvme_apply_firmware", 00:05:24.115 "bdev_nvme_detach_controller", 00:05:24.115 "bdev_nvme_get_controllers", 00:05:24.115 "bdev_nvme_attach_controller", 00:05:24.115 "bdev_nvme_set_hotplug", 00:05:24.115 "bdev_nvme_set_options", 00:05:24.115 "bdev_passthru_delete", 00:05:24.115 "bdev_passthru_create", 00:05:24.115 "bdev_lvol_set_parent_bdev", 00:05:24.115 "bdev_lvol_set_parent", 00:05:24.115 "bdev_lvol_check_shallow_copy", 00:05:24.115 "bdev_lvol_start_shallow_copy", 00:05:24.115 "bdev_lvol_grow_lvstore", 00:05:24.115 "bdev_lvol_get_lvols", 00:05:24.115 "bdev_lvol_get_lvstores", 00:05:24.115 "bdev_lvol_delete", 00:05:24.115 "bdev_lvol_set_read_only", 00:05:24.115 "bdev_lvol_resize", 00:05:24.115 "bdev_lvol_decouple_parent", 00:05:24.115 "bdev_lvol_inflate", 00:05:24.115 "bdev_lvol_rename", 00:05:24.115 "bdev_lvol_clone_bdev", 00:05:24.115 "bdev_lvol_clone", 00:05:24.115 "bdev_lvol_snapshot", 00:05:24.115 "bdev_lvol_create", 00:05:24.115 "bdev_lvol_delete_lvstore", 00:05:24.115 
"bdev_lvol_rename_lvstore", 00:05:24.115 "bdev_lvol_create_lvstore", 00:05:24.115 "bdev_raid_set_options", 00:05:24.115 "bdev_raid_remove_base_bdev", 00:05:24.115 "bdev_raid_add_base_bdev", 00:05:24.115 "bdev_raid_delete", 00:05:24.115 "bdev_raid_create", 00:05:24.115 "bdev_raid_get_bdevs", 00:05:24.115 "bdev_error_inject_error", 00:05:24.115 "bdev_error_delete", 00:05:24.115 "bdev_error_create", 00:05:24.115 "bdev_split_delete", 00:05:24.115 "bdev_split_create", 00:05:24.115 "bdev_delay_delete", 00:05:24.115 "bdev_delay_create", 00:05:24.116 "bdev_delay_update_latency", 00:05:24.116 "bdev_zone_block_delete", 00:05:24.116 "bdev_zone_block_create", 00:05:24.116 "blobfs_create", 00:05:24.116 "blobfs_detect", 00:05:24.116 "blobfs_set_cache_size", 00:05:24.116 "bdev_aio_delete", 00:05:24.116 "bdev_aio_rescan", 00:05:24.116 "bdev_aio_create", 00:05:24.116 "bdev_ftl_set_property", 00:05:24.116 "bdev_ftl_get_properties", 00:05:24.116 "bdev_ftl_get_stats", 00:05:24.116 "bdev_ftl_unmap", 00:05:24.116 "bdev_ftl_unload", 00:05:24.116 "bdev_ftl_delete", 00:05:24.116 "bdev_ftl_load", 00:05:24.116 "bdev_ftl_create", 00:05:24.116 "bdev_virtio_attach_controller", 00:05:24.116 "bdev_virtio_scsi_get_devices", 00:05:24.116 "bdev_virtio_detach_controller", 00:05:24.116 "bdev_virtio_blk_set_hotplug", 00:05:24.116 "bdev_iscsi_delete", 00:05:24.116 "bdev_iscsi_create", 00:05:24.116 "bdev_iscsi_set_options", 00:05:24.116 "accel_error_inject_error", 00:05:24.116 "ioat_scan_accel_module", 00:05:24.116 "dsa_scan_accel_module", 00:05:24.116 "iaa_scan_accel_module", 00:05:24.116 "keyring_file_remove_key", 00:05:24.116 "keyring_file_add_key", 00:05:24.116 "keyring_linux_set_options", 00:05:24.116 "iscsi_get_histogram", 00:05:24.116 "iscsi_enable_histogram", 00:05:24.116 "iscsi_set_options", 00:05:24.116 "iscsi_get_auth_groups", 00:05:24.116 "iscsi_auth_group_remove_secret", 00:05:24.116 "iscsi_auth_group_add_secret", 00:05:24.116 "iscsi_delete_auth_group", 00:05:24.116 "iscsi_create_auth_group", 00:05:24.116 "iscsi_set_discovery_auth", 00:05:24.116 "iscsi_get_options", 00:05:24.116 "iscsi_target_node_request_logout", 00:05:24.116 "iscsi_target_node_set_redirect", 00:05:24.116 "iscsi_target_node_set_auth", 00:05:24.116 "iscsi_target_node_add_lun", 00:05:24.116 "iscsi_get_stats", 00:05:24.116 "iscsi_get_connections", 00:05:24.116 "iscsi_portal_group_set_auth", 00:05:24.116 "iscsi_start_portal_group", 00:05:24.116 "iscsi_delete_portal_group", 00:05:24.116 "iscsi_create_portal_group", 00:05:24.116 "iscsi_get_portal_groups", 00:05:24.116 "iscsi_delete_target_node", 00:05:24.116 "iscsi_target_node_remove_pg_ig_maps", 00:05:24.116 "iscsi_target_node_add_pg_ig_maps", 00:05:24.116 "iscsi_create_target_node", 00:05:24.116 "iscsi_get_target_nodes", 00:05:24.116 "iscsi_delete_initiator_group", 00:05:24.116 "iscsi_initiator_group_remove_initiators", 00:05:24.116 "iscsi_initiator_group_add_initiators", 00:05:24.116 "iscsi_create_initiator_group", 00:05:24.116 "iscsi_get_initiator_groups", 00:05:24.116 "nvmf_set_crdt", 00:05:24.116 "nvmf_set_config", 00:05:24.116 "nvmf_set_max_subsystems", 00:05:24.116 "nvmf_stop_mdns_prr", 00:05:24.116 "nvmf_publish_mdns_prr", 00:05:24.116 "nvmf_subsystem_get_listeners", 00:05:24.116 "nvmf_subsystem_get_qpairs", 00:05:24.116 "nvmf_subsystem_get_controllers", 00:05:24.116 "nvmf_get_stats", 00:05:24.116 "nvmf_get_transports", 00:05:24.116 "nvmf_create_transport", 00:05:24.116 "nvmf_get_targets", 00:05:24.116 "nvmf_delete_target", 00:05:24.116 "nvmf_create_target", 00:05:24.116 
"nvmf_subsystem_allow_any_host", 00:05:24.116 "nvmf_subsystem_remove_host", 00:05:24.116 "nvmf_subsystem_add_host", 00:05:24.116 "nvmf_ns_remove_host", 00:05:24.116 "nvmf_ns_add_host", 00:05:24.116 "nvmf_subsystem_remove_ns", 00:05:24.116 "nvmf_subsystem_add_ns", 00:05:24.116 "nvmf_subsystem_listener_set_ana_state", 00:05:24.116 "nvmf_discovery_get_referrals", 00:05:24.116 "nvmf_discovery_remove_referral", 00:05:24.116 "nvmf_discovery_add_referral", 00:05:24.116 "nvmf_subsystem_remove_listener", 00:05:24.116 "nvmf_subsystem_add_listener", 00:05:24.116 "nvmf_delete_subsystem", 00:05:24.116 "nvmf_create_subsystem", 00:05:24.116 "nvmf_get_subsystems", 00:05:24.116 "env_dpdk_get_mem_stats", 00:05:24.116 "nbd_get_disks", 00:05:24.116 "nbd_stop_disk", 00:05:24.116 "nbd_start_disk", 00:05:24.116 "ublk_recover_disk", 00:05:24.116 "ublk_get_disks", 00:05:24.116 "ublk_stop_disk", 00:05:24.116 "ublk_start_disk", 00:05:24.116 "ublk_destroy_target", 00:05:24.116 "ublk_create_target", 00:05:24.116 "virtio_blk_create_transport", 00:05:24.116 "virtio_blk_get_transports", 00:05:24.116 "vhost_controller_set_coalescing", 00:05:24.116 "vhost_get_controllers", 00:05:24.116 "vhost_delete_controller", 00:05:24.116 "vhost_create_blk_controller", 00:05:24.116 "vhost_scsi_controller_remove_target", 00:05:24.116 "vhost_scsi_controller_add_target", 00:05:24.116 "vhost_start_scsi_controller", 00:05:24.116 "vhost_create_scsi_controller", 00:05:24.116 "thread_set_cpumask", 00:05:24.116 "framework_get_governor", 00:05:24.116 "framework_get_scheduler", 00:05:24.116 "framework_set_scheduler", 00:05:24.116 "framework_get_reactors", 00:05:24.116 "thread_get_io_channels", 00:05:24.116 "thread_get_pollers", 00:05:24.116 "thread_get_stats", 00:05:24.116 "framework_monitor_context_switch", 00:05:24.116 "spdk_kill_instance", 00:05:24.116 "log_enable_timestamps", 00:05:24.116 "log_get_flags", 00:05:24.116 "log_clear_flag", 00:05:24.116 "log_set_flag", 00:05:24.116 "log_get_level", 00:05:24.116 "log_set_level", 00:05:24.116 "log_get_print_level", 00:05:24.116 "log_set_print_level", 00:05:24.116 "framework_enable_cpumask_locks", 00:05:24.116 "framework_disable_cpumask_locks", 00:05:24.116 "framework_wait_init", 00:05:24.116 "framework_start_init", 00:05:24.116 "scsi_get_devices", 00:05:24.116 "bdev_get_histogram", 00:05:24.116 "bdev_enable_histogram", 00:05:24.116 "bdev_set_qos_limit", 00:05:24.116 "bdev_set_qd_sampling_period", 00:05:24.116 "bdev_get_bdevs", 00:05:24.116 "bdev_reset_iostat", 00:05:24.116 "bdev_get_iostat", 00:05:24.116 "bdev_examine", 00:05:24.116 "bdev_wait_for_examine", 00:05:24.116 "bdev_set_options", 00:05:24.116 "notify_get_notifications", 00:05:24.116 "notify_get_types", 00:05:24.116 "accel_get_stats", 00:05:24.116 "accel_set_options", 00:05:24.116 "accel_set_driver", 00:05:24.116 "accel_crypto_key_destroy", 00:05:24.116 "accel_crypto_keys_get", 00:05:24.116 "accel_crypto_key_create", 00:05:24.116 "accel_assign_opc", 00:05:24.116 "accel_get_module_info", 00:05:24.116 "accel_get_opc_assignments", 00:05:24.116 "vmd_rescan", 00:05:24.116 "vmd_remove_device", 00:05:24.116 "vmd_enable", 00:05:24.116 "sock_get_default_impl", 00:05:24.116 "sock_set_default_impl", 00:05:24.116 "sock_impl_set_options", 00:05:24.116 "sock_impl_get_options", 00:05:24.116 "iobuf_get_stats", 00:05:24.116 "iobuf_set_options", 00:05:24.116 "framework_get_pci_devices", 00:05:24.116 "framework_get_config", 00:05:24.116 "framework_get_subsystems", 00:05:24.116 "trace_get_info", 00:05:24.116 "trace_get_tpoint_group_mask", 00:05:24.116 
"trace_disable_tpoint_group", 00:05:24.116 "trace_enable_tpoint_group", 00:05:24.116 "trace_clear_tpoint_mask", 00:05:24.116 "trace_set_tpoint_mask", 00:05:24.116 "keyring_get_keys", 00:05:24.116 "spdk_get_version", 00:05:24.116 "rpc_get_methods" 00:05:24.116 ] 00:05:24.116 04:54:30 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:24.116 04:54:30 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:24.116 04:54:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.116 04:54:30 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:24.116 04:54:30 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 559433 00:05:24.116 04:54:30 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 559433 ']' 00:05:24.116 04:54:30 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 559433 00:05:24.116 04:54:30 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:24.116 04:54:30 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:24.116 04:54:30 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 559433 00:05:24.116 04:54:30 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:24.116 04:54:30 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:24.116 04:54:30 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 559433' 00:05:24.116 killing process with pid 559433 00:05:24.116 04:54:30 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 559433 00:05:24.116 04:54:30 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 559433 00:05:26.642 00:05:26.642 real 0m4.103s 00:05:26.642 user 0m7.237s 00:05:26.642 sys 0m0.695s 00:05:26.642 04:54:32 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.642 04:54:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.642 ************************************ 00:05:26.642 END TEST spdkcli_tcp 00:05:26.642 ************************************ 00:05:26.642 04:54:32 -- common/autotest_common.sh@1142 -- # return 0 00:05:26.642 04:54:32 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:26.642 04:54:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.642 04:54:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.642 04:54:32 -- common/autotest_common.sh@10 -- # set +x 00:05:26.642 ************************************ 00:05:26.642 START TEST dpdk_mem_utility 00:05:26.642 ************************************ 00:05:26.642 04:54:32 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:26.642 * Looking for test storage... 
00:05:26.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:26.642 04:54:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:26.642 04:54:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=560020 00:05:26.642 04:54:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.642 04:54:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 560020 00:05:26.642 04:54:33 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 560020 ']' 00:05:26.642 04:54:33 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.642 04:54:33 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.642 04:54:33 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.642 04:54:33 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.642 04:54:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:26.642 [2024-07-13 04:54:33.093611] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:26.642 [2024-07-13 04:54:33.093770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid560020 ] 00:05:26.899 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.899 [2024-07-13 04:54:33.224478] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.157 [2024-07-13 04:54:33.484646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.091 04:54:34 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.091 04:54:34 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:28.091 04:54:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:28.091 04:54:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:28.091 04:54:34 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.091 04:54:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:28.091 { 00:05:28.091 "filename": "/tmp/spdk_mem_dump.txt" 00:05:28.091 } 00:05:28.091 04:54:34 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.091 04:54:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:28.091 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:28.091 1 heaps totaling size 820.000000 MiB 00:05:28.091 size: 820.000000 MiB heap id: 0 00:05:28.091 end heaps---------- 00:05:28.091 8 mempools totaling size 598.116089 MiB 00:05:28.091 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:28.091 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:28.091 size: 84.521057 MiB name: bdev_io_560020 00:05:28.091 size: 51.011292 MiB name: evtpool_560020 00:05:28.091 size: 
50.003479 MiB name: msgpool_560020 00:05:28.091 size: 21.763794 MiB name: PDU_Pool 00:05:28.091 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:28.091 size: 0.026123 MiB name: Session_Pool 00:05:28.091 end mempools------- 00:05:28.091 6 memzones totaling size 4.142822 MiB 00:05:28.091 size: 1.000366 MiB name: RG_ring_0_560020 00:05:28.091 size: 1.000366 MiB name: RG_ring_1_560020 00:05:28.091 size: 1.000366 MiB name: RG_ring_4_560020 00:05:28.091 size: 1.000366 MiB name: RG_ring_5_560020 00:05:28.091 size: 0.125366 MiB name: RG_ring_2_560020 00:05:28.091 size: 0.015991 MiB name: RG_ring_3_560020 00:05:28.091 end memzones------- 00:05:28.091 04:54:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:28.091 heap id: 0 total size: 820.000000 MiB number of busy elements: 41 number of free elements: 19 00:05:28.091 list of free elements. size: 18.514832 MiB 00:05:28.091 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:28.091 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:28.091 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:28.091 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:28.091 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:28.091 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:28.091 element at address: 0x200019600000 with size: 0.999329 MiB 00:05:28.091 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:28.091 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:28.091 element at address: 0x200018e00000 with size: 0.959900 MiB 00:05:28.091 element at address: 0x200019900040 with size: 0.937256 MiB 00:05:28.091 element at address: 0x200000200000 with size: 0.840942 MiB 00:05:28.091 element at address: 0x20001b000000 with size: 0.583191 MiB 00:05:28.091 element at address: 0x200019200000 with size: 0.491150 MiB 00:05:28.091 element at address: 0x200019a00000 with size: 0.485657 MiB 00:05:28.091 element at address: 0x200013800000 with size: 0.470581 MiB 00:05:28.091 element at address: 0x200028400000 with size: 0.411072 MiB 00:05:28.091 element at address: 0x200003a00000 with size: 0.356140 MiB 00:05:28.091 element at address: 0x20000b1ff040 with size: 0.001038 MiB 00:05:28.091 list of standard malloc elements. 
size: 199.220764 MiB 00:05:28.091 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:28.091 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:28.091 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:28.091 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:28.091 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:28.091 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:28.091 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:28.091 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:28.091 element at address: 0x2000137ff040 with size: 0.000427 MiB 00:05:28.091 element at address: 0x2000137ffa00 with size: 0.000366 MiB 00:05:28.091 element at address: 0x2000002d7480 with size: 0.000244 MiB 00:05:28.091 element at address: 0x2000002d7580 with size: 0.000244 MiB 00:05:28.091 element at address: 0x2000002d7680 with size: 0.000244 MiB 00:05:28.091 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:28.091 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:28.091 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:28.091 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:28.091 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:28.091 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:28.091 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:28.091 element at address: 0x20000b1ff480 with size: 0.000244 MiB 00:05:28.091 element at address: 0x20000b1ff580 with size: 0.000244 MiB 00:05:28.091 element at address: 0x20000b1ff680 with size: 0.000244 MiB 00:05:28.091 element at address: 0x20000b1ff780 with size: 0.000244 MiB 00:05:28.091 element at address: 0x20000b1ff880 with size: 0.000244 MiB 00:05:28.091 element at address: 0x20000b1ff980 with size: 0.000244 MiB 00:05:28.091 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:28.091 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:28.091 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:28.091 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:28.091 element at address: 0x2000137ff200 with size: 0.000244 MiB 00:05:28.091 element at address: 0x2000137ff300 with size: 0.000244 MiB 00:05:28.091 element at address: 0x2000137ff400 with size: 0.000244 MiB 00:05:28.091 element at address: 0x2000137ff500 with size: 0.000244 MiB 00:05:28.091 element at address: 0x2000137ff600 with size: 0.000244 MiB 00:05:28.091 element at address: 0x2000137ff700 with size: 0.000244 MiB 00:05:28.091 element at address: 0x2000137ff800 with size: 0.000244 MiB 00:05:28.091 element at address: 0x2000137ff900 with size: 0.000244 MiB 00:05:28.091 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:28.091 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:28.091 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:28.091 list of memzone associated elements. 
size: 602.264404 MiB 00:05:28.091 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:28.092 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:28.092 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:28.092 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:28.092 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:28.092 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_560020_0 00:05:28.092 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:28.092 associated memzone info: size: 48.002930 MiB name: MP_evtpool_560020_0 00:05:28.092 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:28.092 associated memzone info: size: 48.002930 MiB name: MP_msgpool_560020_0 00:05:28.092 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:28.092 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:28.092 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:28.092 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:28.092 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:28.092 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_560020 00:05:28.092 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:28.092 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_560020 00:05:28.092 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:28.092 associated memzone info: size: 1.007996 MiB name: MP_evtpool_560020 00:05:28.092 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:28.092 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:28.092 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:28.092 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:28.092 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:28.092 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:28.092 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:28.092 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:28.092 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:28.092 associated memzone info: size: 1.000366 MiB name: RG_ring_0_560020 00:05:28.092 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:28.092 associated memzone info: size: 1.000366 MiB name: RG_ring_1_560020 00:05:28.092 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:28.092 associated memzone info: size: 1.000366 MiB name: RG_ring_4_560020 00:05:28.092 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:28.092 associated memzone info: size: 1.000366 MiB name: RG_ring_5_560020 00:05:28.092 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:28.092 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_560020 00:05:28.092 element at address: 0x20001927dbc0 with size: 0.500549 MiB 00:05:28.092 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:28.092 element at address: 0x200013878780 with size: 0.500549 MiB 00:05:28.092 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:28.092 element at address: 0x200019a7c540 with size: 0.250549 MiB 00:05:28.092 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:28.092 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:28.092 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_560020 00:05:28.092 element at address: 0x200018ef5bc0 with size: 0.031799 MiB 00:05:28.092 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:28.092 element at address: 0x2000284693c0 with size: 0.023804 MiB 00:05:28.092 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:28.092 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:28.092 associated memzone info: size: 0.015991 MiB name: RG_ring_3_560020 00:05:28.092 element at address: 0x20002846f540 with size: 0.002502 MiB 00:05:28.092 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:28.092 element at address: 0x2000002d7780 with size: 0.000366 MiB 00:05:28.092 associated memzone info: size: 0.000183 MiB name: MP_msgpool_560020 00:05:28.092 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:28.092 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_560020 00:05:28.092 element at address: 0x20000b1ffa80 with size: 0.000366 MiB 00:05:28.092 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:28.092 04:54:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:28.092 04:54:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 560020 00:05:28.092 04:54:34 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 560020 ']' 00:05:28.092 04:54:34 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 560020 00:05:28.092 04:54:34 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:28.092 04:54:34 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:28.092 04:54:34 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 560020 00:05:28.092 04:54:34 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:28.092 04:54:34 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:28.092 04:54:34 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 560020' 00:05:28.092 killing process with pid 560020 00:05:28.092 04:54:34 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 560020 00:05:28.092 04:54:34 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 560020 00:05:30.618 00:05:30.618 real 0m4.088s 00:05:30.618 user 0m4.092s 00:05:30.618 sys 0m0.596s 00:05:30.618 04:54:37 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.618 04:54:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:30.618 ************************************ 00:05:30.618 END TEST dpdk_mem_utility 00:05:30.618 ************************************ 00:05:30.618 04:54:37 -- common/autotest_common.sh@1142 -- # return 0 00:05:30.618 04:54:37 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:30.618 04:54:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.618 04:54:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.618 04:54:37 -- common/autotest_common.sh@10 -- # set +x 00:05:30.618 ************************************ 00:05:30.618 START TEST event 00:05:30.618 ************************************ 00:05:30.618 04:54:37 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:30.876 * Looking for test storage... 
00:05:30.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:30.876 04:54:37 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:30.876 04:54:37 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:30.876 04:54:37 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:30.876 04:54:37 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:30.876 04:54:37 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.876 04:54:37 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.876 ************************************ 00:05:30.876 START TEST event_perf 00:05:30.876 ************************************ 00:05:30.876 04:54:37 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:30.876 Running I/O for 1 seconds...[2024-07-13 04:54:37.194732] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:30.876 [2024-07-13 04:54:37.194838] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid560609 ] 00:05:30.876 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.876 [2024-07-13 04:54:37.317275] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:31.133 [2024-07-13 04:54:37.575313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.133 [2024-07-13 04:54:37.575367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.133 [2024-07-13 04:54:37.575416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.133 [2024-07-13 04:54:37.575427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.507 Running I/O for 1 seconds... 00:05:32.507 lcore 0: 191278 00:05:32.507 lcore 1: 191278 00:05:32.507 lcore 2: 191277 00:05:32.507 lcore 3: 191278 00:05:32.767 done. 00:05:32.767 00:05:32.767 real 0m1.881s 00:05:32.767 user 0m4.695s 00:05:32.767 sys 0m0.170s 00:05:32.767 04:54:39 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.767 04:54:39 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:32.767 ************************************ 00:05:32.767 END TEST event_perf 00:05:32.767 ************************************ 00:05:32.767 04:54:39 event -- common/autotest_common.sh@1142 -- # return 0 00:05:32.767 04:54:39 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:32.767 04:54:39 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:32.767 04:54:39 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.767 04:54:39 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.767 ************************************ 00:05:32.767 START TEST event_reactor 00:05:32.767 ************************************ 00:05:32.767 04:54:39 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:32.767 [2024-07-13 04:54:39.135336] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:32.767 [2024-07-13 04:54:39.135471] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid560805 ] 00:05:32.767 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.024 [2024-07-13 04:54:39.283656] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.282 [2024-07-13 04:54:39.544398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.655 test_start 00:05:34.655 oneshot 00:05:34.655 tick 100 00:05:34.655 tick 100 00:05:34.655 tick 250 00:05:34.655 tick 100 00:05:34.655 tick 100 00:05:34.655 tick 100 00:05:34.655 tick 250 00:05:34.655 tick 500 00:05:34.655 tick 100 00:05:34.655 tick 100 00:05:34.655 tick 250 00:05:34.655 tick 100 00:05:34.655 tick 100 00:05:34.655 test_end 00:05:34.655 00:05:34.655 real 0m1.906s 00:05:34.655 user 0m1.734s 00:05:34.655 sys 0m0.162s 00:05:34.655 04:54:41 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.655 04:54:41 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:34.655 ************************************ 00:05:34.655 END TEST event_reactor 00:05:34.655 ************************************ 00:05:34.655 04:54:41 event -- common/autotest_common.sh@1142 -- # return 0 00:05:34.655 04:54:41 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:34.655 04:54:41 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:34.655 04:54:41 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.655 04:54:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.655 ************************************ 00:05:34.656 START TEST event_reactor_perf 00:05:34.656 ************************************ 00:05:34.656 04:54:41 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:34.656 [2024-07-13 04:54:41.092892] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:34.656 [2024-07-13 04:54:41.093040] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid561058 ] 00:05:34.914 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.914 [2024-07-13 04:54:41.241216] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.172 [2024-07-13 04:54:41.504387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.546 test_start 00:05:36.546 test_end 00:05:36.546 Performance: 268002 events per second 00:05:36.546 00:05:36.546 real 0m1.910s 00:05:36.546 user 0m1.733s 00:05:36.546 sys 0m0.166s 00:05:36.546 04:54:42 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.546 04:54:42 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:36.546 ************************************ 00:05:36.546 END TEST event_reactor_perf 00:05:36.546 ************************************ 00:05:36.546 04:54:42 event -- common/autotest_common.sh@1142 -- # return 0 00:05:36.546 04:54:42 event -- event/event.sh@49 -- # uname -s 00:05:36.546 04:54:42 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:36.546 04:54:42 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:36.546 04:54:42 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.546 04:54:42 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.546 04:54:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.546 ************************************ 00:05:36.546 START TEST event_scheduler 00:05:36.547 ************************************ 00:05:36.547 04:54:43 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:36.804 * Looking for test storage... 00:05:36.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:36.804 04:54:43 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:36.805 04:54:43 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=561370 00:05:36.805 04:54:43 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:36.805 04:54:43 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.805 04:54:43 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 561370 00:05:36.805 04:54:43 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 561370 ']' 00:05:36.805 04:54:43 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.805 04:54:43 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.805 04:54:43 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
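The waitforlisten step above blocks until the freshly launched app answers RPCs on /var/tmp/spdk.sock. A minimal sketch of that polling pattern, in the spirit of the helper in test/common/autotest_common.sh (the real helper is more elaborate; using rpc_get_methods as the liveness probe and the rpc.py path are assumptions, while max_retries=100 matches the value logged above):

  # Poll until the target PID is alive and its UNIX socket answers an RPC.
  waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i=0
    while (( i++ < 100 )); do                      # max_retries=100, as logged
      kill -0 "$pid" 2>/dev/null || return 1       # app died before listening
      scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null && return 0
      sleep 0.5
    done
    return 1
  }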
00:05:36.805 04:54:43 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.805 04:54:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:36.805 [2024-07-13 04:54:43.151152] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:36.805 [2024-07-13 04:54:43.151314] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid561370 ] 00:05:36.805 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.805 [2024-07-13 04:54:43.275323] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:37.062 [2024-07-13 04:54:43.493822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.062 [2024-07-13 04:54:43.493902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.062 [2024-07-13 04:54:43.493993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.062 [2024-07-13 04:54:43.493998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:37.628 04:54:44 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.628 04:54:44 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:37.628 04:54:44 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:37.628 04:54:44 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.628 04:54:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.628 [2024-07-13 04:54:44.092645] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:37.628 [2024-07-13 04:54:44.092691] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:37.628 [2024-07-13 04:54:44.092723] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:37.628 [2024-07-13 04:54:44.092745] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:37.628 [2024-07-13 04:54:44.092771] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:37.628 04:54:44 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.628 04:54:44 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:37.628 04:54:44 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.628 04:54:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:38.192 [2024-07-13 04:54:44.393722] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
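The scheduler_create_thread test that follows drives the running app through rpc.py with a test-local plugin. A hedged sketch of one such call, with the flags taken verbatim from the xtrace below (-n thread name, -m cpumask, -a active percentage); the PYTHONPATH detail is an assumption about how the plugin module is made importable:

  # Create a thread pinned to core 0 (-m 0x1) that reports 100% busy (-a 100),
  # via the scheduler test's RPC plugin. Assumes the plugin module lives in
  # test/event/scheduler and is importable by rpc.py.
  PYTHONPATH=$PYTHONPATH:test/event/scheduler \
    scripts/rpc.py --plugin scheduler_plugin \
    scheduler_thread_create -n active_pinned -m 0x1 -a 100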
00:05:38.192 04:54:44 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.192 04:54:44 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:38.192 04:54:44 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.192 04:54:44 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.192 04:54:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:38.192 ************************************ 00:05:38.192 START TEST scheduler_create_thread 00:05:38.192 ************************************ 00:05:38.192 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:38.192 04:54:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:38.192 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.192 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.192 2 00:05:38.192 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.192 04:54:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:38.192 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.193 3 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.193 4 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.193 5 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.193 6 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.193 7 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.193 8 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.193 9 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.193 10 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.193 04:54:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.758 04:54:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.758 00:05:38.758 real 0m0.598s 00:05:38.758 user 0m0.012s 00:05:38.758 sys 0m0.002s 00:05:38.758 04:54:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.758 04:54:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.758 ************************************ 00:05:38.758 END TEST scheduler_create_thread 00:05:38.758 ************************************ 00:05:38.758 04:54:45 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:38.758 04:54:45 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:38.758 04:54:45 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 561370 00:05:38.758 04:54:45 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 561370 ']' 00:05:38.758 04:54:45 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 561370 00:05:38.758 04:54:45 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:38.758 04:54:45 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:38.758 04:54:45 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 561370 00:05:38.758 04:54:45 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:38.758 04:54:45 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:38.758 04:54:45 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 561370' 00:05:38.758 killing process with pid 561370 00:05:38.758 04:54:45 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 561370 00:05:38.758 04:54:45 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 561370 00:05:39.015 [2024-07-13 04:54:45.502413] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
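The killprocess call above follows the same shutdown pattern this section opened with for the JSON-config target: signal the PID, then poll it with kill -0 until it exits. A simplified sketch (the 30-iteration bound and 0.5 s sleep match the loop logged at the top of this section; uname/ps checks and error reporting are trimmed):

  # Signal the target, then wait up to ~15 s for the PID to disappear.
  killprocess_sketch() {
    local pid=$1 i=0
    kill "$pid" 2>/dev/null || return 0            # already gone
    while (( i++ < 30 )); do                       # (( i < 30 )) loop, as logged
      kill -0 "$pid" 2>/dev/null || return 0       # kill -0 probes, sends nothing
      sleep 0.5
    done
    return 1                                       # still alive after timeout
  }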
00:05:40.386 00:05:40.386 real 0m3.630s 00:05:40.386 user 0m7.081s 00:05:40.386 sys 0m0.449s 00:05:40.386 04:54:46 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.386 04:54:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:40.386 ************************************ 00:05:40.386 END TEST event_scheduler 00:05:40.386 ************************************ 00:05:40.386 04:54:46 event -- common/autotest_common.sh@1142 -- # return 0 00:05:40.386 04:54:46 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:40.386 04:54:46 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:40.386 04:54:46 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.386 04:54:46 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.386 04:54:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.386 ************************************ 00:05:40.386 START TEST app_repeat 00:05:40.386 ************************************ 00:05:40.386 04:54:46 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:40.386 04:54:46 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.386 04:54:46 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.386 04:54:46 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:40.386 04:54:46 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.386 04:54:46 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:40.386 04:54:46 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:40.386 04:54:46 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:40.386 04:54:46 event.app_repeat -- event/event.sh@19 -- # repeat_pid=561826 00:05:40.386 04:54:46 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:40.386 04:54:46 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.386 04:54:46 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 561826' 00:05:40.386 Process app_repeat pid: 561826 00:05:40.386 04:54:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:40.386 04:54:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:40.386 spdk_app_start Round 0 00:05:40.386 04:54:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 561826 /var/tmp/spdk-nbd.sock 00:05:40.386 04:54:46 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 561826 ']' 00:05:40.386 04:54:46 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.386 04:54:46 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.386 04:54:46 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:40.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:40.386 04:54:46 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.386 04:54:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.386 [2024-07-13 04:54:46.759099] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:40.386 [2024-07-13 04:54:46.759288] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid561826 ] 00:05:40.386 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.642 [2024-07-13 04:54:46.891361] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.900 [2024-07-13 04:54:47.151730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.900 [2024-07-13 04:54:47.151736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.466 04:54:47 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.466 04:54:47 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:41.466 04:54:47 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.723 Malloc0 00:05:41.723 04:54:48 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.981 Malloc1 00:05:41.981 04:54:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.981 04:54:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.981 04:54:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.981 04:54:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:41.981 04:54:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.981 04:54:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:41.981 04:54:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.981 04:54:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.981 04:54:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.981 04:54:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:41.981 04:54:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.981 04:54:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:41.981 04:54:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:41.981 04:54:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:41.981 04:54:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.981 04:54:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:42.545 /dev/nbd0 00:05:42.545 04:54:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:42.545 04:54:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:42.545 04:54:48 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:42.545 04:54:48 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:42.545 04:54:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:42.545 04:54:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:42.545 04:54:48 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:42.545 04:54:48 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:42.545 04:54:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:42.545 04:54:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:42.545 04:54:48 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.545 1+0 records in 00:05:42.545 1+0 records out 00:05:42.545 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387772 s, 10.6 MB/s 00:05:42.545 04:54:48 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.545 04:54:48 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:42.545 04:54:48 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.545 04:54:48 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:42.545 04:54:48 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:42.545 04:54:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.545 04:54:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.545 04:54:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:42.803 /dev/nbd1 00:05:42.803 04:54:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:42.803 04:54:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:42.803 04:54:49 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:42.803 04:54:49 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:42.803 04:54:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:42.803 04:54:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:42.803 04:54:49 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:42.803 04:54:49 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:42.803 04:54:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:42.803 04:54:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:42.803 04:54:49 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.803 1+0 records in 00:05:42.803 1+0 records out 00:05:42.803 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227711 s, 18.0 MB/s 00:05:42.803 04:54:49 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.803 04:54:49 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:42.803 04:54:49 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.803 04:54:49 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:42.803 04:54:49 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:42.803 04:54:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.803 04:54:49 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.803 04:54:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.803 04:54:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.803 04:54:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:43.061 { 00:05:43.061 "nbd_device": "/dev/nbd0", 00:05:43.061 "bdev_name": "Malloc0" 00:05:43.061 }, 00:05:43.061 { 00:05:43.061 "nbd_device": "/dev/nbd1", 00:05:43.061 "bdev_name": "Malloc1" 00:05:43.061 } 00:05:43.061 ]' 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:43.061 { 00:05:43.061 "nbd_device": "/dev/nbd0", 00:05:43.061 "bdev_name": "Malloc0" 00:05:43.061 }, 00:05:43.061 { 00:05:43.061 "nbd_device": "/dev/nbd1", 00:05:43.061 "bdev_name": "Malloc1" 00:05:43.061 } 00:05:43.061 ]' 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:43.061 /dev/nbd1' 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:43.061 /dev/nbd1' 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:43.061 256+0 records in 00:05:43.061 256+0 records out 00:05:43.061 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00513966 s, 204 MB/s 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:43.061 256+0 records in 00:05:43.061 256+0 records out 00:05:43.061 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285278 s, 36.8 MB/s 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:43.061 256+0 records in 00:05:43.061 256+0 records out 00:05:43.061 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0302331 s, 34.7 MB/s 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.061 04:54:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:43.626 04:54:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:43.626 04:54:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:43.626 04:54:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:43.626 04:54:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.626 04:54:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.626 04:54:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:43.626 04:54:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.626 04:54:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.626 04:54:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.626 04:54:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:43.626 04:54:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:43.626 04:54:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:43.626 04:54:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:43.626 04:54:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.626 04:54:50 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.626 04:54:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:43.626 04:54:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.626 04:54:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.626 04:54:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.626 04:54:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.626 04:54:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.884 04:54:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:43.884 04:54:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:43.884 04:54:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.142 04:54:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:44.142 04:54:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:44.142 04:54:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.142 04:54:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:44.142 04:54:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:44.142 04:54:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:44.142 04:54:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:44.142 04:54:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:44.142 04:54:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:44.142 04:54:50 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:44.399 04:54:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:45.779 [2024-07-13 04:54:52.224831] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.050 [2024-07-13 04:54:52.481060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.050 [2024-07-13 04:54:52.481062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.308 [2024-07-13 04:54:52.704526] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:46.308 [2024-07-13 04:54:52.704630] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:47.678 04:54:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:47.678 04:54:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:47.678 spdk_app_start Round 1 00:05:47.678 04:54:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 561826 /var/tmp/spdk-nbd.sock 00:05:47.678 04:54:53 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 561826 ']' 00:05:47.678 04:54:53 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.678 04:54:53 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.678 04:54:53 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:47.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
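Round 1 is starting above; Round 0 just completed one full pass of the pattern every round repeats: create two malloc bdevs over RPC, expose them as kernel nbd devices, push 1 MiB of random data through each, compare it back, detach, and SIGTERM the app. A condensed sketch of one round, using the rpc.py path, socket, and sizes from this run:

    # one app_repeat round, condensed (all commands appear in the trace above)
    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096                     # 64 MB bdev, 4096-byte blocks -> Malloc0
    $rpc bdev_malloc_create 64 4096                     # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0               # expose each bdev as a block device
    $rpc nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256 # 1 MiB of reference data
    for d in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$d" bs=4096 count=256 oflag=direct
        cmp -b -n 1M nbdrandtest "$d"                   # fail the round if readback differs
    done
    rm nbdrandtest
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    $rpc spdk_kill_instance SIGTERM                     # shut the app down for the next round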
00:05:47.678 04:54:53 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.678 04:54:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:47.678 04:54:54 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.678 04:54:54 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:47.678 04:54:54 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.936 Malloc0 00:05:48.194 04:54:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.452 Malloc1 00:05:48.452 04:54:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.452 04:54:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.452 04:54:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.452 04:54:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:48.452 04:54:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.452 04:54:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:48.452 04:54:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.452 04:54:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.452 04:54:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.452 04:54:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:48.452 04:54:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.452 04:54:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:48.452 04:54:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:48.452 04:54:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:48.452 04:54:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.452 04:54:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:48.712 /dev/nbd0 00:05:48.712 04:54:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:48.712 04:54:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:48.712 04:54:55 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:48.712 04:54:55 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:48.712 04:54:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:48.712 04:54:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:48.712 04:54:55 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:48.712 04:54:55 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:48.712 04:54:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:48.712 04:54:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:48.712 04:54:55 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:48.712 1+0 records in 00:05:48.712 1+0 records out 00:05:48.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196048 s, 20.9 MB/s 00:05:48.712 04:54:55 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.712 04:54:55 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:48.712 04:54:55 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.712 04:54:55 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:48.712 04:54:55 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:48.712 04:54:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.712 04:54:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.712 04:54:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:48.971 /dev/nbd1 00:05:48.971 04:54:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:48.971 04:54:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:48.971 04:54:55 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:48.971 04:54:55 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:48.971 04:54:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:48.971 04:54:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:48.971 04:54:55 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:48.971 04:54:55 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:48.971 04:54:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:48.971 04:54:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:48.971 04:54:55 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.971 1+0 records in 00:05:48.971 1+0 records out 00:05:48.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213878 s, 19.2 MB/s 00:05:48.971 04:54:55 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.971 04:54:55 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:48.971 04:54:55 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.971 04:54:55 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:48.971 04:54:55 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:48.971 04:54:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.971 04:54:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.971 04:54:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.971 04:54:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.971 04:54:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.228 04:54:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:49.228 { 00:05:49.228 "nbd_device": "/dev/nbd0", 00:05:49.228 "bdev_name": "Malloc0" 00:05:49.228 }, 00:05:49.228 { 00:05:49.228 "nbd_device": "/dev/nbd1", 00:05:49.228 "bdev_name": "Malloc1" 00:05:49.228 } 00:05:49.228 ]' 00:05:49.228 04:54:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:49.228 { 00:05:49.228 "nbd_device": "/dev/nbd0", 00:05:49.228 "bdev_name": "Malloc0" 00:05:49.228 }, 00:05:49.228 { 00:05:49.228 "nbd_device": "/dev/nbd1", 00:05:49.228 "bdev_name": "Malloc1" 00:05:49.228 } 00:05:49.228 ]' 00:05:49.228 04:54:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.228 04:54:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:49.228 /dev/nbd1' 00:05:49.228 04:54:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:49.228 /dev/nbd1' 00:05:49.228 04:54:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.228 04:54:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:49.228 04:54:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:49.228 04:54:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:49.228 04:54:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:49.228 04:54:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:49.228 04:54:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.228 04:54:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.228 04:54:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:49.228 04:54:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.228 04:54:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:49.228 04:54:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:49.228 256+0 records in 00:05:49.228 256+0 records out 00:05:49.228 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00374187 s, 280 MB/s 00:05:49.228 04:54:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.228 04:54:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:49.228 256+0 records in 00:05:49.228 256+0 records out 00:05:49.228 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024177 s, 43.4 MB/s 00:05:49.228 04:54:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.228 04:54:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:49.486 256+0 records in 00:05:49.486 256+0 records out 00:05:49.486 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284876 s, 36.8 MB/s 00:05:49.486 04:54:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:49.486 04:54:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.486 04:54:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.486 04:54:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:49.486 04:54:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.486 04:54:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:49.486 04:54:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:49.486 04:54:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.486 04:54:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:49.486 04:54:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.486 04:54:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:49.486 04:54:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.486 04:54:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:49.486 04:54:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.486 04:54:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.486 04:54:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:49.486 04:54:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:49.486 04:54:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.486 04:54:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:49.743 04:54:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:49.743 04:54:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:49.743 04:54:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:49.743 04:54:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.743 04:54:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.743 04:54:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:49.744 04:54:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.744 04:54:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.744 04:54:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.744 04:54:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:50.001 04:54:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:50.001 04:54:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:50.001 04:54:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:50.001 04:54:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.001 04:54:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.001 04:54:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:50.001 04:54:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.001 04:54:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.001 04:54:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.001 04:54:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.001 04:54:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.259 04:54:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:50.259 04:54:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:50.259 04:54:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.259 04:54:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:50.259 04:54:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:50.259 04:54:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.259 04:54:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:50.259 04:54:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:50.259 04:54:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:50.259 04:54:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:50.259 04:54:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:50.259 04:54:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:50.259 04:54:56 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:50.825 04:54:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:52.199 [2024-07-13 04:54:58.439219] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.199 [2024-07-13 04:54:58.694234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.199 [2024-07-13 04:54:58.694235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.457 [2024-07-13 04:54:58.906787] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:52.457 [2024-07-13 04:54:58.906897] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:53.829 04:55:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:53.829 04:55:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:53.829 spdk_app_start Round 2 00:05:53.829 04:55:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 561826 /var/tmp/spdk-nbd.sock 00:05:53.829 04:55:00 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 561826 ']' 00:05:53.829 04:55:00 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.829 04:55:00 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.829 04:55:00 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:53.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
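Between detach and shutdown the harness also asserts that no nbd devices remain attached: nbd_get_disks should return an empty JSON array, which jq reduces to nothing and grep -c counts as 0 (grep's nonzero exit on zero matches is why a bare true shows up in the trace). A sketch of that check:

    # post-round check: no nbd devices left attached
    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    names=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)   # grep -c prints 0 but exits 1 on no match
    [ "$count" -eq 0 ] || { echo "nbd devices still attached"; exit 1; }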
00:05:53.829 04:55:00 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.829 04:55:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.829 04:55:00 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.829 04:55:00 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:53.829 04:55:00 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.087 Malloc0 00:05:54.087 04:55:00 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.652 Malloc1 00:05:54.652 04:55:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.652 04:55:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.652 04:55:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.652 04:55:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:54.652 04:55:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.652 04:55:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:54.652 04:55:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.652 04:55:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.652 04:55:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.652 04:55:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:54.652 04:55:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.652 04:55:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:54.652 04:55:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:54.652 04:55:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:54.652 04:55:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.652 04:55:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:54.910 /dev/nbd0 00:05:54.910 04:55:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:54.910 04:55:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:54.910 04:55:01 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:54.910 04:55:01 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:54.910 04:55:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:54.910 04:55:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:54.910 04:55:01 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:54.910 04:55:01 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:54.910 04:55:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:54.910 04:55:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:54.910 04:55:01 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:54.910 1+0 records in 00:05:54.910 1+0 records out 00:05:54.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190144 s, 21.5 MB/s 00:05:54.910 04:55:01 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.910 04:55:01 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:54.910 04:55:01 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.910 04:55:01 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:54.910 04:55:01 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:54.910 04:55:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.910 04:55:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.910 04:55:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:55.167 /dev/nbd1 00:05:55.167 04:55:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:55.167 04:55:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:55.167 04:55:01 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:55.167 04:55:01 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:55.167 04:55:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:55.167 04:55:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:55.167 04:55:01 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:55.167 04:55:01 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:55.167 04:55:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:55.167 04:55:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:55.167 04:55:01 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.167 1+0 records in 00:05:55.167 1+0 records out 00:05:55.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255369 s, 16.0 MB/s 00:05:55.167 04:55:01 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.167 04:55:01 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:55.167 04:55:01 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.167 04:55:01 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:55.167 04:55:01 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:55.167 04:55:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.167 04:55:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.167 04:55:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.167 04:55:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.167 04:55:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:55.425 { 00:05:55.425 "nbd_device": "/dev/nbd0", 00:05:55.425 "bdev_name": "Malloc0" 00:05:55.425 }, 00:05:55.425 { 00:05:55.425 "nbd_device": "/dev/nbd1", 00:05:55.425 "bdev_name": "Malloc1" 00:05:55.425 } 00:05:55.425 ]' 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:55.425 { 00:05:55.425 "nbd_device": "/dev/nbd0", 00:05:55.425 "bdev_name": "Malloc0" 00:05:55.425 }, 00:05:55.425 { 00:05:55.425 "nbd_device": "/dev/nbd1", 00:05:55.425 "bdev_name": "Malloc1" 00:05:55.425 } 00:05:55.425 ]' 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:55.425 /dev/nbd1' 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:55.425 /dev/nbd1' 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:55.425 256+0 records in 00:05:55.425 256+0 records out 00:05:55.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00493053 s, 213 MB/s 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.425 256+0 records in 00:05:55.425 256+0 records out 00:05:55.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245931 s, 42.6 MB/s 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.425 256+0 records in 00:05:55.425 256+0 records out 00:05:55.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028578 s, 36.7 MB/s 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.425 04:55:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.682 04:55:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.682 04:55:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.682 04:55:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.682 04:55:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.682 04:55:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.682 04:55:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.682 04:55:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.682 04:55:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.682 04:55:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.682 04:55:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:55.941 04:55:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:55.941 04:55:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:55.941 04:55:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:55.941 04:55:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.941 04:55:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.941 04:55:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:55.941 04:55:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.941 04:55:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.941 04:55:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.941 04:55:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.941 04:55:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.199 04:55:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:56.199 04:55:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:56.199 04:55:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.199 04:55:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:56.199 04:55:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:56.199 04:55:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.199 04:55:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:56.199 04:55:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:56.199 04:55:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:56.199 04:55:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:56.199 04:55:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:56.199 04:55:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:56.199 04:55:02 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.765 04:55:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:58.136 [2024-07-13 04:55:04.532036] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:58.394 [2024-07-13 04:55:04.786418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.394 [2024-07-13 04:55:04.786419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.664 [2024-07-13 04:55:05.007951] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:58.664 [2024-07-13 04:55:05.008042] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:59.648 04:55:06 event.app_repeat -- event/event.sh@38 -- # waitforlisten 561826 /var/tmp/spdk-nbd.sock 00:05:59.648 04:55:06 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 561826 ']' 00:05:59.648 04:55:06 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.648 04:55:06 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.648 04:55:06 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
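At this point the harness has looped through Rounds 0-2 against a single app_repeat process (started with -t 4, so the app restarts itself after each SIGTERM), and is now waiting for the Round 3 restart before the final explicit kill. Roughly, the outer structure is:

    # outer structure of app_repeat_test, condensed; the trap guards abnormal exits
    trap 'killprocess "$repeat_pid"; exit 1' SIGINT SIGTERM EXIT
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
        # ... nbd write/verify pass, then spdk_kill_instance SIGTERM (see sketch above) ...
        sleep 3                                         # give the app time to restart
    done
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock  # Round 3: app comes back one last time
    trap - SIGINT SIGTERM EXIT
    killprocess "$repeat_pid"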
00:05:59.648 04:55:06 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.648 04:55:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.215 04:55:06 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.215 04:55:06 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:00.215 04:55:06 event.app_repeat -- event/event.sh@39 -- # killprocess 561826 00:06:00.215 04:55:06 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 561826 ']' 00:06:00.215 04:55:06 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 561826 00:06:00.215 04:55:06 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:00.215 04:55:06 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:00.215 04:55:06 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 561826 00:06:00.215 04:55:06 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:00.215 04:55:06 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:00.215 04:55:06 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 561826' 00:06:00.215 killing process with pid 561826 00:06:00.215 04:55:06 event.app_repeat -- common/autotest_common.sh@967 -- # kill 561826 00:06:00.215 04:55:06 event.app_repeat -- common/autotest_common.sh@972 -- # wait 561826 00:06:01.589 spdk_app_start is called in Round 0. 00:06:01.589 Shutdown signal received, stop current app iteration 00:06:01.589 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:01.590 spdk_app_start is called in Round 1. 00:06:01.590 Shutdown signal received, stop current app iteration 00:06:01.590 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:01.590 spdk_app_start is called in Round 2. 00:06:01.590 Shutdown signal received, stop current app iteration 00:06:01.590 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:01.590 spdk_app_start is called in Round 3. 
00:06:01.590 Shutdown signal received, stop current app iteration 00:06:01.590 04:55:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:01.590 04:55:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:01.590 00:06:01.590 real 0m20.991s 00:06:01.590 user 0m43.185s 00:06:01.590 sys 0m3.575s 00:06:01.590 04:55:07 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.590 04:55:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:01.590 ************************************ 00:06:01.590 END TEST app_repeat 00:06:01.590 ************************************ 00:06:01.590 04:55:07 event -- common/autotest_common.sh@1142 -- # return 0 00:06:01.590 04:55:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:01.590 04:55:07 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:01.590 04:55:07 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.590 04:55:07 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.590 04:55:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:01.590 ************************************ 00:06:01.590 START TEST cpu_locks 00:06:01.590 ************************************ 00:06:01.590 04:55:07 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:01.590 * Looking for test storage... 00:06:01.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:01.590 04:55:07 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:01.590 04:55:07 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:01.590 04:55:07 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:01.590 04:55:07 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:01.590 04:55:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.590 04:55:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.590 04:55:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.590 ************************************ 00:06:01.590 START TEST default_locks 00:06:01.590 ************************************ 00:06:01.590 04:55:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:01.590 04:55:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=564565 00:06:01.590 04:55:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.590 04:55:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 564565 00:06:01.590 04:55:07 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 564565 ']' 00:06:01.590 04:55:07 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.590 04:55:07 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.590 04:55:07 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
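The default_locks case starting here reduces to one assertion: an spdk_tgt launched with -m 0x1 must hold a file lock named spdk_cpu_lock* for its core, visible via lslocks (the stray "lslocks: write error" in the trace below is likely benign; grep -q closes the pipe as soon as it matches). A sketch of the check, assuming util-linux lslocks:

    # assert the target holds its per-core lock file (name pattern from the trace below)
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock       # nonzero exit -> no core lock held
    }
    locks_exist "$spdk_tgt_pid" || { echo "no spdk_cpu_lock held by $spdk_tgt_pid"; exit 1; }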
00:06:01.590 04:55:07 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.590 04:55:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.590 [2024-07-13 04:55:07.919366] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:01.590 [2024-07-13 04:55:07.919518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid564565 ] 00:06:01.590 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.590 [2024-07-13 04:55:08.050901] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.848 [2024-07-13 04:55:08.310557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.783 04:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.783 04:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:02.783 04:55:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 564565 00:06:02.783 04:55:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 564565 00:06:02.783 04:55:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:03.042 lslocks: write error 00:06:03.042 04:55:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 564565 00:06:03.042 04:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 564565 ']' 00:06:03.042 04:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 564565 00:06:03.042 04:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:03.042 04:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:03.042 04:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 564565 00:06:03.042 04:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:03.042 04:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:03.042 04:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 564565' 00:06:03.042 killing process with pid 564565 00:06:03.042 04:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 564565 00:06:03.042 04:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 564565 00:06:05.570 04:55:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 564565 00:06:05.570 04:55:12 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:05.570 04:55:12 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 564565 00:06:05.570 04:55:12 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:05.570 04:55:12 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.570 04:55:12 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:05.570 04:55:12 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.570 04:55:12 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 564565 00:06:05.570 04:55:12 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 564565 ']' 00:06:05.570 04:55:12 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.570 04:55:12 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.570 04:55:12 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.570 04:55:12 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.570 04:55:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (564565) - No such process 00:06:05.570 ERROR: process (pid: 564565) is no longer running 00:06:05.570 04:55:12 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.570 04:55:12 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:05.570 04:55:12 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:05.570 04:55:12 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:05.570 04:55:12 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:05.570 04:55:12 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:05.570 04:55:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:05.570 04:55:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:05.570 04:55:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:05.570 04:55:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:05.570 00:06:05.570 real 0m4.195s 00:06:05.570 user 0m4.217s 00:06:05.570 sys 0m0.743s 00:06:05.570 04:55:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.570 04:55:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.570 ************************************ 00:06:05.570 END TEST default_locks 00:06:05.570 ************************************ 00:06:05.570 04:55:12 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:05.570 04:55:12 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:05.570 04:55:12 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.570 04:55:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.570 04:55:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.829 ************************************ 00:06:05.829 START TEST default_locks_via_rpc 00:06:05.829 ************************************ 00:06:05.829 04:55:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:05.829 04:55:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=565125 00:06:05.829 04:55:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.829 04:55:12 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 565125 00:06:05.829 04:55:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 565125 ']' 00:06:05.829 04:55:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.830 04:55:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.830 04:55:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.830 04:55:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.830 04:55:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.830 [2024-07-13 04:55:12.174491] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:05.830 [2024-07-13 04:55:12.174662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid565125 ] 00:06:05.830 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.830 [2024-07-13 04:55:12.309757] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.088 [2024-07-13 04:55:12.571515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.020 04:55:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.020 04:55:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:07.020 04:55:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:07.020 04:55:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.020 04:55:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.020 04:55:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.020 04:55:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:07.020 04:55:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:07.020 04:55:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:07.020 04:55:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:07.020 04:55:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:07.020 04:55:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.020 04:55:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.020 04:55:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.020 04:55:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 565125 00:06:07.020 04:55:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 565125 00:06:07.020 04:55:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.586 04:55:13 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 565125 00:06:07.586 04:55:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 565125 ']' 00:06:07.586 04:55:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 565125 00:06:07.586 04:55:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:07.586 04:55:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:07.586 04:55:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 565125 00:06:07.586 04:55:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:07.586 04:55:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:07.586 04:55:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 565125' 00:06:07.586 killing process with pid 565125 00:06:07.586 04:55:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 565125 00:06:07.586 04:55:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 565125 00:06:10.116 00:06:10.116 real 0m4.292s 00:06:10.116 user 0m4.256s 00:06:10.116 sys 0m0.739s 00:06:10.116 04:55:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.116 04:55:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.116 ************************************ 00:06:10.116 END TEST default_locks_via_rpc 00:06:10.116 ************************************ 00:06:10.116 04:55:16 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:10.116 04:55:16 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:10.116 04:55:16 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.116 04:55:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.116 04:55:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.116 ************************************ 00:06:10.116 START TEST non_locking_app_on_locked_coremask 00:06:10.116 ************************************ 00:06:10.116 04:55:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:10.116 04:55:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=565606 00:06:10.116 04:55:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.116 04:55:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 565606 /var/tmp/spdk.sock 00:06:10.116 04:55:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 565606 ']' 00:06:10.116 04:55:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.116 04:55:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.116 04:55:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.116 04:55:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.116 04:55:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.116 [2024-07-13 04:55:16.513481] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:10.116 [2024-07-13 04:55:16.513629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid565606 ] 00:06:10.116 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.374 [2024-07-13 04:55:16.640444] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.632 [2024-07-13 04:55:16.894745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.567 04:55:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.567 04:55:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:11.567 04:55:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=565827 00:06:11.567 04:55:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:11.567 04:55:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 565827 /var/tmp/spdk2.sock 00:06:11.567 04:55:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 565827 ']' 00:06:11.567 04:55:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.567 04:55:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.567 04:55:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.567 04:55:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.567 04:55:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.567 [2024-07-13 04:55:17.898581] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:11.567 [2024-07-13 04:55:17.898743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid565827 ] 00:06:11.567 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.826 [2024-07-13 04:55:18.102710] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
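At this point the non_locking_app_on_locked_coremask setup is complete: pid 565606 holds the core 0 lock, while pid 565827 runs on the same core with locking disabled. Reduced to the two launches actually traced above (binary path shortened):

    # First target claims /var/tmp/spdk_cpu_lock_000; the second opts out,
    # so both can run reactors on core 0 at once.
    ./build/bin/spdk_tgt -m 0x1 &
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &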
00:06:11.826 [2024-07-13 04:55:18.102780] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.392 [2024-07-13 04:55:18.628525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.296 04:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.296 04:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:14.296 04:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 565606 00:06:14.296 04:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 565606 00:06:14.296 04:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.554 lslocks: write error 00:06:14.554 04:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 565606 00:06:14.554 04:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 565606 ']' 00:06:14.554 04:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 565606 00:06:14.554 04:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:14.554 04:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:14.554 04:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 565606 00:06:14.554 04:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:14.554 04:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:14.554 04:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 565606' 00:06:14.554 killing process with pid 565606 00:06:14.554 04:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 565606 00:06:14.554 04:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 565606 00:06:19.815 04:55:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 565827 00:06:19.815 04:55:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 565827 ']' 00:06:19.815 04:55:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 565827 00:06:19.815 04:55:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:19.815 04:55:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:19.815 04:55:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 565827 00:06:19.815 04:55:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:19.815 04:55:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:19.815 04:55:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 565827' 00:06:19.815 killing 
process with pid 565827 00:06:19.815 04:55:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 565827 00:06:19.815 04:55:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 565827 00:06:22.337 00:06:22.337 real 0m12.267s 00:06:22.337 user 0m12.666s 00:06:22.337 sys 0m1.481s 00:06:22.337 04:55:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.337 04:55:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.337 ************************************ 00:06:22.337 END TEST non_locking_app_on_locked_coremask 00:06:22.337 ************************************ 00:06:22.337 04:55:28 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:22.337 04:55:28 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:22.337 04:55:28 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.337 04:55:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.337 04:55:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.337 ************************************ 00:06:22.337 START TEST locking_app_on_unlocked_coremask 00:06:22.337 ************************************ 00:06:22.337 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:22.337 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=567062 00:06:22.337 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:22.337 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 567062 /var/tmp/spdk.sock 00:06:22.337 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 567062 ']' 00:06:22.337 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.337 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.338 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.338 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.338 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.596 [2024-07-13 04:55:28.845612] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
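locking_app_on_unlocked_coremask inverts the previous scenario: the first target (pid 567062) starts with --disable-cpumask-locks, so the lock-enabled second target launched below can still claim core 0. In essence:

    # First target takes no lock; the second one claims
    # /var/tmp/spdk_cpu_lock_000 even though core 0 is already in use.
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &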
00:06:22.596 [2024-07-13 04:55:28.845755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid567062 ] 00:06:22.596 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.596 [2024-07-13 04:55:28.980571] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:22.596 [2024-07-13 04:55:28.980645] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.852 [2024-07-13 04:55:29.241972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.784 04:55:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.784 04:55:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:23.784 04:55:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=567325 00:06:23.784 04:55:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:23.784 04:55:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 567325 /var/tmp/spdk2.sock 00:06:23.784 04:55:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 567325 ']' 00:06:23.784 04:55:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.784 04:55:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.784 04:55:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:23.784 04:55:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.784 04:55:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.784 [2024-07-13 04:55:30.230454] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
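The locks_exist checks that recur throughout this trace verify a claim via lslocks; the "lslocks: write error" lines are benign, since grep -q exits at the first match and lslocks then writes into a closed pipe. The helper, essentially as it expands in the xtrace:

    # A claimed core shows up as a lock on /var/tmp/spdk_cpu_lock_NNN
    # held by the target process.
    locks_exist() {
      lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 567325 && echo "pid 567325 holds a core lock"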
00:06:23.784 [2024-07-13 04:55:30.230613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid567325 ] 00:06:24.042 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.042 [2024-07-13 04:55:30.418237] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.608 [2024-07-13 04:55:30.939972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.520 04:55:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.520 04:55:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:26.520 04:55:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 567325 00:06:26.520 04:55:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 567325 00:06:26.520 04:55:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.085 lslocks: write error 00:06:27.085 04:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 567062 00:06:27.085 04:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 567062 ']' 00:06:27.085 04:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 567062 00:06:27.085 04:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:27.085 04:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.085 04:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 567062 00:06:27.085 04:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:27.085 04:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:27.085 04:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 567062' 00:06:27.085 killing process with pid 567062 00:06:27.085 04:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 567062 00:06:27.085 04:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 567062 00:06:32.353 04:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 567325 00:06:32.353 04:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 567325 ']' 00:06:32.353 04:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 567325 00:06:32.353 04:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:32.353 04:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:32.353 04:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 567325 00:06:32.353 04:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:32.353 04:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:32.353 04:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 567325' 00:06:32.353 killing process with pid 567325 00:06:32.353 04:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 567325 00:06:32.353 04:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 567325 00:06:34.887 00:06:34.887 real 0m12.458s 00:06:34.887 user 0m12.828s 00:06:34.887 sys 0m1.532s 00:06:34.887 04:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.887 04:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.887 ************************************ 00:06:34.887 END TEST locking_app_on_unlocked_coremask 00:06:34.887 ************************************ 00:06:34.887 04:55:41 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:34.887 04:55:41 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:34.887 04:55:41 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.887 04:55:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.887 04:55:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.887 ************************************ 00:06:34.887 START TEST locking_app_on_locked_coremask 00:06:34.887 ************************************ 00:06:34.887 04:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:34.887 04:55:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=568595 00:06:34.887 04:55:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.887 04:55:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 568595 /var/tmp/spdk.sock 00:06:34.887 04:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 568595 ']' 00:06:34.887 04:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.887 04:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.887 04:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.887 04:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.887 04:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.887 [2024-07-13 04:55:41.338921] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
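locking_app_on_locked_coremask is the strict case: both targets keep locking enabled on the same mask, so the second must fail to start, as the trace below shows for pid 568823. The shape of it:

    # Second lock-enabled target on an already-claimed core exits with
    # "Cannot create lock on core 0, probably process ... has claimed it."
    ./build/bin/spdk_tgt -m 0x1 &                        # claims core 0
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # startup fails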
00:06:34.887 [2024-07-13 04:55:41.339062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid568595 ] 00:06:35.145 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.145 [2024-07-13 04:55:41.465613] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.402 [2024-07-13 04:55:41.717188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.336 04:55:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.336 04:55:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:36.336 04:55:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=568823 00:06:36.336 04:55:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:36.336 04:55:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 568823 /var/tmp/spdk2.sock 00:06:36.336 04:55:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:36.336 04:55:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 568823 /var/tmp/spdk2.sock 00:06:36.336 04:55:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:36.336 04:55:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:36.336 04:55:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:36.336 04:55:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:36.336 04:55:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 568823 /var/tmp/spdk2.sock 00:06:36.336 04:55:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 568823 ']' 00:06:36.336 04:55:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.336 04:55:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.336 04:55:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.336 04:55:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.336 04:55:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.336 [2024-07-13 04:55:42.708994] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
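The harness wraps that second startup in NOT, so the test passes exactly when waitforlisten fails. A simplified rendering of the wrapper; the real definition in autotest_common.sh may carry extra bookkeeping:

    # Invert the status of a command that is expected to fail.
    NOT() {
      if "$@"; then
        return 1   # unexpected success
      fi
      return 0     # expected failure
    }
    NOT waitforlisten 568823 /var/tmp/spdk2.sock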
00:06:36.336 [2024-07-13 04:55:42.709162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid568823 ] 00:06:36.336 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.594 [2024-07-13 04:55:42.898343] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 568595 has claimed it. 00:06:36.594 [2024-07-13 04:55:42.898430] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:37.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (568823) - No such process 00:06:37.158 ERROR: process (pid: 568823) is no longer running 00:06:37.158 04:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.158 04:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:37.158 04:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:37.158 04:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:37.158 04:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:37.158 04:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:37.158 04:55:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 568595 00:06:37.158 04:55:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 568595 00:06:37.158 04:55:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.426 lslocks: write error 00:06:37.426 04:55:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 568595 00:06:37.426 04:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 568595 ']' 00:06:37.426 04:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 568595 00:06:37.426 04:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:37.426 04:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:37.426 04:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 568595 00:06:37.426 04:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:37.426 04:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:37.426 04:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 568595' 00:06:37.426 killing process with pid 568595 00:06:37.426 04:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 568595 00:06:37.426 04:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 568595 00:06:39.955 00:06:39.955 real 0m5.007s 00:06:39.955 user 0m5.223s 00:06:39.955 sys 0m0.956s 00:06:39.955 04:55:46 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.955 04:55:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.955 ************************************ 00:06:39.955 END TEST locking_app_on_locked_coremask 00:06:39.955 ************************************ 00:06:39.955 04:55:46 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:39.955 04:55:46 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:39.955 04:55:46 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:39.955 04:55:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.955 04:55:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.955 ************************************ 00:06:39.955 START TEST locking_overlapped_coremask 00:06:39.955 ************************************ 00:06:39.955 04:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:39.955 04:55:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=569262 00:06:39.955 04:55:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:39.955 04:55:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 569262 /var/tmp/spdk.sock 00:06:39.955 04:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 569262 ']' 00:06:39.955 04:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.955 04:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.955 04:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.955 04:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.956 04:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.956 [2024-07-13 04:55:46.403298] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
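For locking_overlapped_coremask the masks are the point: -m 0x7 pins reactors to cores 0-2, and the second target's -m 0x1c (below) pins them to cores 2-4, so the two claims collide exactly on core 2:

    # Decode the overlap of the two core masks used by this subtest.
    printf 'overlap mask: 0x%x\n' $((0x7 & 0x1c))   # prints 0x4, i.e. core 2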
00:06:39.956 [2024-07-13 04:55:46.403470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid569262 ] 00:06:40.213 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.213 [2024-07-13 04:55:46.538377] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.471 [2024-07-13 04:55:46.804515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.471 [2024-07-13 04:55:46.804568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.471 [2024-07-13 04:55:46.804559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.407 04:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.407 04:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:41.407 04:55:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=569401 00:06:41.407 04:55:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:41.407 04:55:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 569401 /var/tmp/spdk2.sock 00:06:41.407 04:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:41.407 04:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 569401 /var/tmp/spdk2.sock 00:06:41.407 04:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:41.407 04:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:41.407 04:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:41.407 04:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:41.407 04:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 569401 /var/tmp/spdk2.sock 00:06:41.407 04:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 569401 ']' 00:06:41.407 04:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.407 04:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.407 04:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.407 04:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.407 04:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.407 [2024-07-13 04:55:47.806595] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:41.407 [2024-07-13 04:55:47.806757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid569401 ] 00:06:41.407 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.665 [2024-07-13 04:55:47.995977] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 569262 has claimed it. 00:06:41.665 [2024-07-13 04:55:47.996072] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:42.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (569401) - No such process 00:06:42.230 ERROR: process (pid: 569401) is no longer running 00:06:42.230 04:55:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.230 04:55:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:42.230 04:55:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:42.230 04:55:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:42.230 04:55:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:42.230 04:55:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:42.230 04:55:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:42.230 04:55:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:42.230 04:55:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:42.230 04:55:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:42.230 04:55:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 569262 00:06:42.230 04:55:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 569262 ']' 00:06:42.230 04:55:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 569262 00:06:42.230 04:55:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:42.230 04:55:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:42.230 04:55:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 569262 00:06:42.230 04:55:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:42.230 04:55:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:42.230 04:55:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 569262' 00:06:42.230 killing process with pid 569262 00:06:42.230 04:55:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 
-- # kill 569262 00:06:42.230 04:55:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 569262 00:06:44.761 00:06:44.761 real 0m4.753s 00:06:44.761 user 0m12.326s 00:06:44.761 sys 0m0.736s 00:06:44.761 04:55:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.761 04:55:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.761 ************************************ 00:06:44.761 END TEST locking_overlapped_coremask 00:06:44.761 ************************************ 00:06:44.761 04:55:51 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:44.761 04:55:51 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:44.761 04:55:51 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.761 04:55:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.761 04:55:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.761 ************************************ 00:06:44.761 START TEST locking_overlapped_coremask_via_rpc 00:06:44.761 ************************************ 00:06:44.761 04:55:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:44.761 04:55:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=569835 00:06:44.761 04:55:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:44.761 04:55:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 569835 /var/tmp/spdk.sock 00:06:44.761 04:55:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 569835 ']' 00:06:44.761 04:55:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.761 04:55:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.761 04:55:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.761 04:55:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.761 04:55:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.761 [2024-07-13 04:55:51.208361] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:44.761 [2024-07-13 04:55:51.208526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid569835 ] 00:06:45.020 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.020 [2024-07-13 04:55:51.342945] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
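locking_overlapped_coremask_via_rpc replays the same overlap with both targets started unlocked; the claims are then made at runtime through the framework_enable_cpumask_locks RPC, and the second enable must fail on core 2. Condensed to the traced steps:

    # Both targets come up with locking disabled on overlapping masks.
    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    ./build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    # First enable claims cores 0-2; the second then fails on core 2.
    ./scripts/rpc.py framework_enable_cpumask_locks
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # JSON-RPC error below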
00:06:45.020 [2024-07-13 04:55:51.343017] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.278 [2024-07-13 04:55:51.607149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.278 [2024-07-13 04:55:51.607203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.278 [2024-07-13 04:55:51.607208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.212 04:55:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.212 04:55:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:46.212 04:55:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=569975 00:06:46.212 04:55:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:46.212 04:55:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 569975 /var/tmp/spdk2.sock 00:06:46.212 04:55:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 569975 ']' 00:06:46.212 04:55:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.212 04:55:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.212 04:55:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.212 04:55:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.212 04:55:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.212 [2024-07-13 04:55:52.525194] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:46.212 [2024-07-13 04:55:52.525348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid569975 ] 00:06:46.212 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.212 [2024-07-13 04:55:52.708095] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:46.212 [2024-07-13 04:55:52.708165] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:46.777 [2024-07-13 04:55:53.170013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.777 [2024-07-13 04:55:53.170057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:46.777 [2024-07-13 04:55:53.170052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.307 [2024-07-13 04:55:55.223050] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 569835 has claimed it. 
00:06:49.307 request: 00:06:49.307 { 00:06:49.307 "method": "framework_enable_cpumask_locks", 00:06:49.307 "req_id": 1 00:06:49.307 } 00:06:49.307 Got JSON-RPC error response 00:06:49.307 response: 00:06:49.307 { 00:06:49.307 "code": -32603, 00:06:49.307 "message": "Failed to claim CPU core: 2" 00:06:49.307 } 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 569835 /var/tmp/spdk.sock 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 569835 ']' 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 569975 /var/tmp/spdk2.sock 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 569975 ']' 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
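What this exercise demonstrates: the first spdk_tgt instance (pid 569835, started earlier on cores 0-2) claimed the per-core lock files by default, while the second instance (pid 569975) was launched with -m 0x1c (cores 2-4) and --disable-cpumask-locks, so it started cleanly despite sharing core 2. Asking that second instance to claim its cores after the fact via framework_enable_cpumask_locks then has to fail on the shared core, which is the -32603 JSON-RPC error captured above. The same scenario can be reproduced outside the harness with something like the following (a sketch; the first instance's 0x7 mask and the socket paths are illustrative, inferred from how this test behaves):

  ./build/bin/spdk_tgt -m 0x7 -r /var/tmp/spdk.sock &    # claims /var/tmp/spdk_cpu_lock_000..002
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # expected failure: {"code": -32603, "message": "Failed to claim CPU core: 2"}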
00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:49.307 00:06:49.307 real 0m4.605s 00:06:49.307 user 0m1.462s 00:06:49.307 sys 0m0.239s 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.307 04:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.307 ************************************ 00:06:49.307 END TEST locking_overlapped_coremask_via_rpc 00:06:49.307 ************************************ 00:06:49.307 04:55:55 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:49.307 04:55:55 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:49.307 04:55:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 569835 ]] 00:06:49.307 04:55:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 569835 00:06:49.307 04:55:55 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 569835 ']' 00:06:49.307 04:55:55 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 569835 00:06:49.307 04:55:55 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:49.307 04:55:55 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.307 04:55:55 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 569835 00:06:49.307 04:55:55 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:49.307 04:55:55 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:49.307 04:55:55 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 569835' 00:06:49.307 killing process with pid 569835 00:06:49.307 04:55:55 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 569835 00:06:49.307 04:55:55 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 569835 00:06:51.839 04:55:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 569975 ]] 00:06:51.839 04:55:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 569975 00:06:51.839 04:55:58 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 569975 ']' 00:06:51.839 04:55:58 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 569975 00:06:51.839 04:55:58 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
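The check_remaining_locks trace above is noisy xtrace output, but the check itself is small: glob the lock directory and compare against the brace expansion of the cores the surviving instance should hold. Reduced to its essentials (same naming scheme as in the log):

  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  # passes only if cores 0-2 are locked and nothing else is
  [[ "${locks[*]}" == "${locks_expected[*]}" ]]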
00:06:51.839 04:55:58 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:51.839 04:55:58 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 569975 00:06:51.839 04:55:58 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:51.839 04:55:58 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:51.839 04:55:58 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 569975' 00:06:51.839 killing process with pid 569975 00:06:51.839 04:55:58 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 569975 00:06:51.839 04:55:58 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 569975 00:06:54.368 04:56:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:54.368 04:56:00 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:54.368 04:56:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 569835 ]] 00:06:54.368 04:56:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 569835 00:06:54.368 04:56:00 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 569835 ']' 00:06:54.368 04:56:00 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 569835 00:06:54.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (569835) - No such process 00:06:54.368 04:56:00 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 569835 is not found' 00:06:54.368 Process with pid 569835 is not found 00:06:54.368 04:56:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 569975 ]] 00:06:54.368 04:56:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 569975 00:06:54.368 04:56:00 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 569975 ']' 00:06:54.368 04:56:00 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 569975 00:06:54.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (569975) - No such process 00:06:54.368 04:56:00 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 569975 is not found' 00:06:54.368 Process with pid 569975 is not found 00:06:54.368 04:56:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:54.368 00:06:54.368 real 0m52.600s 00:06:54.368 user 1m27.386s 00:06:54.368 sys 0m7.712s 00:06:54.368 04:56:00 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.368 04:56:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.368 ************************************ 00:06:54.368 END TEST cpu_locks 00:06:54.369 ************************************ 00:06:54.369 04:56:00 event -- common/autotest_common.sh@1142 -- # return 0 00:06:54.369 00:06:54.369 real 1m23.294s 00:06:54.369 user 2m25.976s 00:06:54.369 sys 0m12.470s 00:06:54.369 04:56:00 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.369 04:56:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:54.369 ************************************ 00:06:54.369 END TEST event 00:06:54.369 ************************************ 00:06:54.369 04:56:00 -- common/autotest_common.sh@1142 -- # return 0 00:06:54.369 04:56:00 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:54.369 04:56:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:54.369 04:56:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.369 04:56:00 -- 
common/autotest_common.sh@10 -- # set +x 00:06:54.369 ************************************ 00:06:54.369 START TEST thread 00:06:54.369 ************************************ 00:06:54.369 04:56:00 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:54.369 * Looking for test storage... 00:06:54.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:54.369 04:56:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:54.369 04:56:00 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:54.369 04:56:00 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.369 04:56:00 thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.369 ************************************ 00:06:54.369 START TEST thread_poller_perf 00:06:54.369 ************************************ 00:06:54.369 04:56:00 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:54.369 [2024-07-13 04:56:00.537181] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:54.369 [2024-07-13 04:56:00.537320] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid571012 ] 00:06:54.369 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.369 [2024-07-13 04:56:00.672551] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.627 [2024-07-13 04:56:00.927958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.627 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:55.999 ====================================== 00:06:55.999 busy:2715910796 (cyc) 00:06:55.999 total_run_count: 282000 00:06:55.999 tsc_hz: 2700000000 (cyc) 00:06:55.999 ====================================== 00:06:56.000 poller_cost: 9630 (cyc), 3566 (nsec) 00:06:56.000 00:06:56.000 real 0m1.883s 00:06:56.000 user 0m1.721s 00:06:56.000 sys 0m0.153s 00:06:56.000 04:56:02 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.000 04:56:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:56.000 ************************************ 00:06:56.000 END TEST thread_poller_perf 00:06:56.000 ************************************ 00:06:56.000 04:56:02 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:56.000 04:56:02 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:56.000 04:56:02 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:56.000 04:56:02 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.000 04:56:02 thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.000 ************************************ 00:06:56.000 START TEST thread_poller_perf 00:06:56.000 ************************************ 00:06:56.000 04:56:02 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:56.000 [2024-07-13 04:56:02.465012] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:56.000 [2024-07-13 04:56:02.465131] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid571288 ] 00:06:56.257 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.257 [2024-07-13 04:56:02.595099] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.515 [2024-07-13 04:56:02.831328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.515 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:57.888 ====================================== 00:06:57.888 busy:2705056510 (cyc) 00:06:57.888 total_run_count: 3681000 00:06:57.888 tsc_hz: 2700000000 (cyc) 00:06:57.888 ====================================== 00:06:57.888 poller_cost: 734 (cyc), 271 (nsec) 00:06:57.888 00:06:57.888 real 0m1.824s 00:06:57.888 user 0m1.654s 00:06:57.888 sys 0m0.161s 00:06:57.888 04:56:04 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.888 04:56:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:57.888 ************************************ 00:06:57.888 END TEST thread_poller_perf 00:06:57.888 ************************************ 00:06:57.888 04:56:04 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:57.888 04:56:04 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:57.888 00:06:57.888 real 0m3.850s 00:06:57.888 user 0m3.427s 00:06:57.888 sys 0m0.416s 00:06:57.888 04:56:04 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.888 04:56:04 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.888 ************************************ 00:06:57.888 END TEST thread 00:06:57.888 ************************************ 00:06:57.888 04:56:04 -- common/autotest_common.sh@1142 -- # return 0 00:06:57.888 04:56:04 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:57.888 04:56:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:57.888 04:56:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.888 04:56:04 -- common/autotest_common.sh@10 -- # set +x 00:06:57.888 ************************************ 00:06:57.888 START TEST accel 00:06:57.888 ************************************ 00:06:57.888 04:56:04 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:57.888 * Looking for test storage... 00:06:57.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:57.888 04:56:04 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:57.888 04:56:04 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:57.888 04:56:04 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:57.888 04:56:04 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=571608 00:06:57.888 04:56:04 accel -- accel/accel.sh@63 -- # waitforlisten 571608 00:06:57.888 04:56:04 accel -- common/autotest_common.sh@829 -- # '[' -z 571608 ']' 00:06:57.888 04:56:04 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.888 04:56:04 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:57.888 04:56:04 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:57.888 04:56:04 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.888 04:56:04 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.888 04:56:04 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
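The two poller_perf result tables above are internally consistent: poller_cost in cycles is busy divided by total_run_count, and the nanosecond figure converts that through tsc_hz. For the 1-microsecond-period run, 2715910796 / 282000 ≈ 9630 cycles ≈ 3566 ns at 2.7 GHz; for the 0-period (run-always) pollers, 2705056510 / 3681000 ≈ 734 cycles ≈ 271 ns. The gap is plausibly the extra timer bookkeeping a timed poller pays per invocation, though the log itself only reports the totals. The arithmetic, for reference:

  busy=2715910796 runs=282000 tsc_hz=2700000000
  cost_cyc=$((busy / runs))                      # 9630
  cost_nsec=$((cost_cyc * 1000000000 / tsc_hz))  # 3566
  echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"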
00:06:57.888 04:56:04 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.888 04:56:04 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.888 04:56:04 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.888 04:56:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.888 04:56:04 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.888 04:56:04 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.888 04:56:04 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:57.888 04:56:04 accel -- accel/accel.sh@41 -- # jq -r . 00:06:58.146 [2024-07-13 04:56:04.462861] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:58.146 [2024-07-13 04:56:04.463033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid571608 ] 00:06:58.146 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.146 [2024-07-13 04:56:04.592890] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.404 [2024-07-13 04:56:04.846668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.338 04:56:05 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.338 04:56:05 accel -- common/autotest_common.sh@862 -- # return 0 00:06:59.338 04:56:05 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:59.338 04:56:05 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:59.338 04:56:05 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:59.338 04:56:05 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:59.338 04:56:05 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:59.338 04:56:05 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:59.338 04:56:05 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.338 04:56:05 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:59.338 04:56:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.338 04:56:05 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.338 04:56:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.338 04:56:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.338 04:56:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.338 04:56:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.338 04:56:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.338 04:56:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.338 04:56:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.338 04:56:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.338 04:56:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.338 04:56:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.338 04:56:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.338 04:56:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.338 04:56:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.338 04:56:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.338 04:56:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.338 04:56:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.338 04:56:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.338 04:56:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.338 04:56:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.338 04:56:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.338 04:56:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.338 04:56:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.338 04:56:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.338 04:56:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.338 04:56:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.339 04:56:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.339 04:56:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.339 04:56:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.339 04:56:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.339 04:56:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.339 04:56:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.339 04:56:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.339 04:56:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.339 04:56:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.339 04:56:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.339 04:56:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.339 04:56:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.339 04:56:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.339 04:56:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.339 04:56:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.339 04:56:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.339 04:56:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.339 04:56:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.339 04:56:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.339 
04:56:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.339 04:56:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.339 04:56:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.339 04:56:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.339 04:56:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.339 04:56:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.339 04:56:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.339 04:56:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.339 04:56:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.339 04:56:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.339 04:56:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.339 04:56:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.339 04:56:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.339 04:56:05 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.339 04:56:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.339 04:56:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.339 04:56:05 accel -- accel/accel.sh@75 -- # killprocess 571608 00:06:59.339 04:56:05 accel -- common/autotest_common.sh@948 -- # '[' -z 571608 ']' 00:06:59.339 04:56:05 accel -- common/autotest_common.sh@952 -- # kill -0 571608 00:06:59.339 04:56:05 accel -- common/autotest_common.sh@953 -- # uname 00:06:59.339 04:56:05 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:59.339 04:56:05 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 571608 00:06:59.339 04:56:05 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:59.339 04:56:05 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:59.339 04:56:05 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 571608' 00:06:59.339 killing process with pid 571608 00:06:59.339 04:56:05 accel -- common/autotest_common.sh@967 -- # kill 571608 00:06:59.339 04:56:05 accel -- common/autotest_common.sh@972 -- # wait 571608 00:07:01.862 04:56:08 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:01.862 04:56:08 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:01.862 04:56:08 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:01.862 04:56:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.862 04:56:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.862 04:56:08 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:01.862 04:56:08 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:01.862 04:56:08 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:01.862 04:56:08 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.862 04:56:08 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.862 04:56:08 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.862 04:56:08 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.862 04:56:08 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.862 04:56:08 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:01.862 04:56:08 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
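The long IFS== / read -r opc module loop above is accel.sh consuming the output of accel_get_opc_assignments: with no hardware accel modules configured in this run, every opcode is assigned to the software module, and the script records that in expected_opcs before shutting the target down. The same mapping can be inspected directly with the RPC client, using the exact jq filter from the script (a sketch; the opcode names shown in the output comments are illustrative):

  ./scripts/rpc.py accel_get_opc_assignments \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # copy=software
  # fill=software
  # crc32c=software
  # ...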
00:07:02.121 04:56:08 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.121 04:56:08 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:02.121 04:56:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.121 04:56:08 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:02.121 04:56:08 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:02.121 04:56:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.121 04:56:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.121 ************************************ 00:07:02.121 START TEST accel_missing_filename 00:07:02.121 ************************************ 00:07:02.121 04:56:08 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:02.121 04:56:08 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:02.121 04:56:08 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:02.121 04:56:08 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:02.121 04:56:08 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.121 04:56:08 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:02.121 04:56:08 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.121 04:56:08 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:02.121 04:56:08 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:02.121 04:56:08 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:02.121 04:56:08 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.121 04:56:08 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.121 04:56:08 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.121 04:56:08 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.121 04:56:08 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.121 04:56:08 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:02.121 04:56:08 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:02.121 [2024-07-13 04:56:08.473579] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:02.121 [2024-07-13 04:56:08.473692] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid572056 ] 00:07:02.121 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.121 [2024-07-13 04:56:08.606263] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.379 [2024-07-13 04:56:08.860777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.636 [2024-07-13 04:56:09.094296] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.201 [2024-07-13 04:56:09.655406] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:03.766 A filename is required. 
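accel_missing_filename is a negative test: it runs a compress workload with no input file, and accel_perf refuses to start with 'A filename is required.' Per the option help dumped later in this log, -l names the uncompressed input for compress/decompress workloads, so the corresponding valid invocation (the one the compress_verify test that follows actually uses, with the test's bib file) is roughly:

  ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib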
00:07:03.766 04:56:10 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:03.766 04:56:10 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:03.766 04:56:10 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:03.766 04:56:10 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:03.766 04:56:10 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:03.766 04:56:10 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:03.766 00:07:03.766 real 0m1.691s 00:07:03.766 user 0m1.480s 00:07:03.766 sys 0m0.239s 00:07:03.766 04:56:10 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.766 04:56:10 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:03.766 ************************************ 00:07:03.766 END TEST accel_missing_filename 00:07:03.766 ************************************ 00:07:03.766 04:56:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:03.766 04:56:10 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:03.766 04:56:10 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:03.766 04:56:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.766 04:56:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.766 ************************************ 00:07:03.766 START TEST accel_compress_verify 00:07:03.766 ************************************ 00:07:03.766 04:56:10 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:03.766 04:56:10 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:03.766 04:56:10 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:03.766 04:56:10 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:03.766 04:56:10 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.766 04:56:10 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:03.766 04:56:10 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.766 04:56:10 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:03.766 04:56:10 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:03.766 04:56:10 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:03.766 04:56:10 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.766 04:56:10 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.766 04:56:10 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.766 04:56:10 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.766 04:56:10 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.766 04:56:10 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:03.766 04:56:10 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:03.766 [2024-07-13 04:56:10.209270] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:03.766 [2024-07-13 04:56:10.209387] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid572333 ] 00:07:04.024 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.024 [2024-07-13 04:56:10.342173] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.281 [2024-07-13 04:56:10.604242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.538 [2024-07-13 04:56:10.840156] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:05.104 [2024-07-13 04:56:11.399377] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:05.364 00:07:05.364 Compression does not support the verify option, aborting. 00:07:05.364 04:56:11 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:05.364 04:56:11 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:05.364 04:56:11 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:05.364 04:56:11 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:05.364 04:56:11 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:05.364 04:56:11 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:05.364 00:07:05.364 real 0m1.687s 00:07:05.364 user 0m1.479s 00:07:05.364 sys 0m0.237s 00:07:05.364 04:56:11 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.364 04:56:11 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:05.364 ************************************ 00:07:05.364 END TEST accel_compress_verify 00:07:05.364 ************************************ 00:07:05.623 04:56:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.623 04:56:11 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:05.623 04:56:11 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:05.623 04:56:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.623 04:56:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.623 ************************************ 00:07:05.623 START TEST accel_wrong_workload 00:07:05.623 ************************************ 00:07:05.623 04:56:11 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:05.623 04:56:11 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:05.623 04:56:11 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:05.623 04:56:11 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:05.623 04:56:11 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.623 04:56:11 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:05.623 04:56:11 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.623 04:56:11 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:05.623 04:56:11 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:05.623 04:56:11 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:05.623 04:56:11 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.623 04:56:11 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.623 04:56:11 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.623 04:56:11 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.623 04:56:11 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.623 04:56:11 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:05.623 04:56:11 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:05.623 Unsupported workload type: foobar 00:07:05.623 [2024-07-13 04:56:11.945523] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:05.623 accel_perf options: 00:07:05.623 [-h help message] 00:07:05.623 [-q queue depth per core] 00:07:05.623 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:05.623 [-T number of threads per core 00:07:05.623 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:05.624 [-t time in seconds] 00:07:05.624 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:05.624 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:05.624 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:05.624 [-l for compress/decompress workloads, name of uncompressed input file 00:07:05.624 [-S for crc32c workload, use this seed value (default 0) 00:07:05.624 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:05.624 [-f for fill workload, use this BYTE value (default 255) 00:07:05.624 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:05.624 [-y verify result if this switch is on] 00:07:05.624 [-a tasks to allocate per core (default: same value as -q)] 00:07:05.624 Can be used to spread operations across a wider range of memory. 
00:07:05.624 04:56:11 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:05.624 04:56:11 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:05.624 04:56:11 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:05.624 04:56:11 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:05.624 00:07:05.624 real 0m0.060s 00:07:05.624 user 0m0.064s 00:07:05.624 sys 0m0.033s 00:07:05.624 04:56:11 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.624 04:56:11 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:05.624 ************************************ 00:07:05.624 END TEST accel_wrong_workload 00:07:05.624 ************************************ 00:07:05.624 04:56:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.624 04:56:11 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:05.624 04:56:11 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:05.624 04:56:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.624 04:56:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.624 ************************************ 00:07:05.624 START TEST accel_negative_buffers 00:07:05.624 ************************************ 00:07:05.624 04:56:12 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:05.624 04:56:12 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:05.624 04:56:12 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:05.624 04:56:12 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:05.624 04:56:12 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.624 04:56:12 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:05.624 04:56:12 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.624 04:56:12 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:05.624 04:56:12 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:05.624 04:56:12 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:05.624 04:56:12 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.624 04:56:12 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.624 04:56:12 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.624 04:56:12 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.624 04:56:12 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.624 04:56:12 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:05.624 04:56:12 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:05.624 -x option must be non-negative. 
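accel_wrong_workload (-w foobar) and accel_negative_buffers (-x -1) follow the same negative-test pattern: the NOT wrapper from autotest_common.sh inverts the exit status, so accel_perf's argument-parsing failure (es=1) is the passing outcome and the option help dumped above is expected noise. A simplified sketch of what the wrapper amounts to (the real helper also distinguishes signal exits, es > 128):

  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))  # NOT succeeds only when the wrapped command failed
  }
  NOT ./build/examples/accel_perf -t 1 -w foobar && echo "negative test passed"

The accel_crc32c runs that follow switch back to positive tests: accel_perf is driven with -w crc32c -S 32 -y, and the long runs of 'val=' lines are xtrace of the harness reading back each configured parameter (opcode, seed, 4096-byte buffers, 1-second run, software module) before asserting that the crc32c operation ran on the expected module.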
00:07:05.624 [2024-07-13 04:56:12.048134] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:05.624 accel_perf options: 00:07:05.624 [-h help message] 00:07:05.624 [-q queue depth per core] 00:07:05.624 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:05.624 [-T number of threads per core 00:07:05.624 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:05.624 [-t time in seconds] 00:07:05.624 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:05.624 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:05.624 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:05.624 [-l for compress/decompress workloads, name of uncompressed input file 00:07:05.624 [-S for crc32c workload, use this seed value (default 0) 00:07:05.624 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:05.624 [-f for fill workload, use this BYTE value (default 255) 00:07:05.624 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:05.624 [-y verify result if this switch is on] 00:07:05.624 [-a tasks to allocate per core (default: same value as -q)] 00:07:05.624 Can be used to spread operations across a wider range of memory. 00:07:05.624 04:56:12 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:05.624 04:56:12 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:05.624 04:56:12 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:05.624 04:56:12 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:05.624 00:07:05.624 real 0m0.060s 00:07:05.624 user 0m0.060s 00:07:05.624 sys 0m0.035s 00:07:05.624 04:56:12 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.624 04:56:12 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:05.624 ************************************ 00:07:05.624 END TEST accel_negative_buffers 00:07:05.624 ************************************ 00:07:05.624 04:56:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.624 04:56:12 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:05.624 04:56:12 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:05.624 04:56:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.624 04:56:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.624 ************************************ 00:07:05.624 START TEST accel_crc32c 00:07:05.624 ************************************ 00:07:05.624 04:56:12 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:05.624 04:56:12 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:05.624 04:56:12 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:05.624 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.624 04:56:12 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:05.624 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.624 04:56:12 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:05.624 04:56:12 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:05.624 04:56:12 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.624 04:56:12 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.624 04:56:12 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.624 04:56:12 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.624 04:56:12 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.624 04:56:12 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:05.624 04:56:12 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:05.882 [2024-07-13 04:56:12.151038] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:05.882 [2024-07-13 04:56:12.151179] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid572651 ] 00:07:05.882 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.882 [2024-07-13 04:56:12.279748] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.141 [2024-07-13 04:56:12.547661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.399 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.400 04:56:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:06.400 04:56:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.400 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.400 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.400 04:56:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:06.400 04:56:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.400 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.400 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.400 04:56:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:06.400 04:56:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.400 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.400 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.400 04:56:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.400 04:56:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.400 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.400 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.400 04:56:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:06.400 04:56:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.400 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.400 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.400 04:56:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.400 04:56:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.400 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.400 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.400 04:56:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.400 04:56:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.400 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.400 04:56:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.928 04:56:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.928 04:56:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:08.928 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.928 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.928 04:56:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.928 04:56:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.928 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.928 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.928 04:56:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.928 04:56:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.928 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.928 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.928 04:56:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.928 04:56:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.928 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.928 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.928 04:56:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.928 04:56:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.928 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.928 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.928 04:56:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.928 04:56:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.928 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.928 04:56:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.928 04:56:14 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.928 04:56:14 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:08.928 04:56:14 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.928 00:07:08.928 real 0m2.702s 00:07:08.928 user 0m2.458s 00:07:08.928 sys 0m0.242s 00:07:08.928 04:56:14 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.928 04:56:14 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:08.928 ************************************ 00:07:08.928 END TEST accel_crc32c 00:07:08.928 ************************************ 00:07:08.928 04:56:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.928 04:56:14 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:08.928 04:56:14 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:08.928 04:56:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.928 04:56:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.928 ************************************ 00:07:08.928 START TEST accel_crc32c_C2 00:07:08.928 ************************************ 00:07:08.928 04:56:14 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:08.928 04:56:14 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:08.928 04:56:14 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:08.928 04:56:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.928 04:56:14 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:08.928 04:56:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.928 04:56:14 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:08.928 04:56:14 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.928 04:56:14 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.928 04:56:14 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.928 04:56:14 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.928 04:56:14 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.928 04:56:14 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.928 04:56:14 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:08.928 04:56:14 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:08.928 [2024-07-13 04:56:14.897376] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:08.928 [2024-07-13 04:56:14.897500] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid572950 ] 00:07:08.928 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.928 [2024-07-13 04:56:15.025137] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.928 [2024-07-13 04:56:15.288377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.186 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.186 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.186 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.186 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.186 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.186 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.186 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.186 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.186 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:09.186 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.186 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.186 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.186 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.186 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.186 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.186 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.186 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.186 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.187 04:56:15 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:07:09.187 04:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.088 00:07:11.088 real 0m2.688s 00:07:11.088 user 0m2.451s 00:07:11.088 sys 0m0.234s 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.088 04:56:17 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:11.088 ************************************ 00:07:11.088 END TEST accel_crc32c_C2 00:07:11.088 ************************************ 00:07:11.088 04:56:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.088 04:56:17 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:11.088 04:56:17 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:11.088 04:56:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.088 04:56:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.345 ************************************ 00:07:11.345 START TEST accel_copy 00:07:11.345 ************************************ 00:07:11.345 04:56:17 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:11.345 04:56:17 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:11.345 04:56:17 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
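Each "START TEST ... / END TEST ..." banner pair with the `real`/`user`/`sys` lines between them comes from the run_test wrapper in common/autotest_common.sh (the @1099/@1105/@1123/@1124/@1142 frames above), which times the wrapped accel_test call. A simplified sketch, assuming only what the banners and timing output show — the real helper also does argument checks and xtrace_disable bookkeeping:

    run_test() {                      # simplified; real version lives in autotest_common.sh
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                     # e.g. accel_test -t 1 -w copy -y
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

The ~2.7 s wall time reported for each `-t 1` (one-second) workload is mostly SPDK app startup and teardown wrapped around the timed second.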
00:07:11.345 04:56:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.345 04:56:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.345 04:56:17 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:11.345 04:56:17 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:11.345 04:56:17 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:11.345 04:56:17 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.345 04:56:17 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.345 04:56:17 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.345 04:56:17 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.345 04:56:17 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.345 04:56:17 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:11.345 04:56:17 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:11.345 [2024-07-13 04:56:17.629446] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:11.345 [2024-07-13 04:56:17.629569] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid573295 ] 00:07:11.345 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.345 [2024-07-13 04:56:17.759097] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.602 [2024-07-13 04:56:18.021994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.860 04:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.387 04:56:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:14.387 04:56:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.387 04:56:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.387 04:56:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.387 
04:56:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:14.387 04:56:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.387 04:56:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.387 04:56:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.387 04:56:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:14.387 04:56:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.387 04:56:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.387 04:56:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.387 04:56:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:14.387 04:56:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.387 04:56:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.387 04:56:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.387 04:56:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:14.387 04:56:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.387 04:56:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.387 04:56:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.387 04:56:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:14.387 04:56:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.387 04:56:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.387 04:56:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.387 04:56:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.387 04:56:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:14.387 04:56:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.387 00:07:14.387 real 0m2.697s 00:07:14.387 user 0m2.448s 00:07:14.387 sys 0m0.245s 00:07:14.387 04:56:20 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.387 04:56:20 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:14.387 ************************************ 00:07:14.387 END TEST accel_copy 00:07:14.387 ************************************ 00:07:14.387 04:56:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:14.387 04:56:20 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:14.387 04:56:20 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:14.387 04:56:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.387 04:56:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.387 ************************************ 00:07:14.387 START TEST accel_fill 00:07:14.387 ************************************ 00:07:14.387 04:56:20 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:14.387 04:56:20 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:14.387 04:56:20 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:14.387 04:56:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.387 04:56:20 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:14.387 04:56:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.387 04:56:20 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:14.387 04:56:20 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:07:14.387 04:56:20 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.387 04:56:20 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.387 04:56:20 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.387 04:56:20 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.387 04:56:20 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.387 04:56:20 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:14.387 04:56:20 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:14.387 [2024-07-13 04:56:20.375541] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:14.387 [2024-07-13 04:56:20.375665] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid573643 ] 00:07:14.387 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.387 [2024-07-13 04:56:20.515070] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.387 [2024-07-13 04:56:20.775328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
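This block launches the fill test as `accel_perf ... -t 1 -w fill -f 128 -q 64 -a 64 -y`; in the traced config, `-f 128` surfaces as `val=0x80` (128 decimal, the fill byte), and the paired `val=64` entries that follow just below line up with `-q`/`-a`. Reading those as queue depth and preallocated-task count is an inference from the trace, not from accel_perf's usage text. A by-hand rerun under the same workspace layout (the CI run additionally passes `-c /dev/fd/62` for JSON config, empty here, which this sketch drops):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y   # fill 4096-byte buffers, verify
    printf '0x%x\n' 128                                              # -> 0x80, matching the traced fill value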
00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.645 04:56:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.552 04:56:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.552 04:56:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.552 04:56:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.552 04:56:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.552 04:56:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.552 04:56:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.552 04:56:23 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:07:16.552 04:56:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.552 04:56:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.552 04:56:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.552 04:56:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.552 04:56:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.552 04:56:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.552 04:56:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.552 04:56:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.552 04:56:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.552 04:56:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.552 04:56:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.552 04:56:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.552 04:56:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.552 04:56:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.552 04:56:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.552 04:56:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.552 04:56:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.552 04:56:23 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.552 04:56:23 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:16.552 04:56:23 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.552 00:07:16.552 real 0m2.703s 00:07:16.552 user 0m0.011s 00:07:16.552 sys 0m0.002s 00:07:16.552 04:56:23 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.552 04:56:23 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:16.552 ************************************ 00:07:16.552 END TEST accel_fill 00:07:16.552 ************************************ 00:07:16.813 04:56:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:16.813 04:56:23 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:16.813 04:56:23 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:16.813 04:56:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.813 04:56:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.813 ************************************ 00:07:16.813 START TEST accel_copy_crc32c 00:07:16.813 ************************************ 00:07:16.813 04:56:23 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:16.813 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:16.813 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:16.813 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.813 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:16.813 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.813 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:16.813 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:16.813 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.813 04:56:23 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.813 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.813 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.813 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.813 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:16.813 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:16.813 [2024-07-13 04:56:23.131761] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:16.813 [2024-07-13 04:56:23.131910] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid573939 ] 00:07:16.813 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.813 [2024-07-13 04:56:23.266873] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.071 [2024-07-13 04:56:23.538304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.330 
04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.330 04:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.859 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.859 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.859 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.859 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.859 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.859 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.859 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.859 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.859 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.859 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.859 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.859 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.859 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.859 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.859 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.859 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.859 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.859 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.859 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.859 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.859 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.859 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.859 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.859 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.859 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.859 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:19.859 04:56:25 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.859 00:07:19.860 real 0m2.715s 00:07:19.860 user 0m2.471s 00:07:19.860 sys 0m0.241s 00:07:19.860 04:56:25 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.860 04:56:25 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:19.860 ************************************ 00:07:19.860 END TEST accel_copy_crc32c 00:07:19.860 ************************************ 00:07:19.860 04:56:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:19.860 04:56:25 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:19.860 04:56:25 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:19.860 04:56:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.860 04:56:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.860 ************************************ 00:07:19.860 START TEST accel_copy_crc32c_C2 00:07:19.860 ************************************ 00:07:19.860 04:56:25 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:19.860 04:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.860 04:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:19.860 04:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.860 04:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:19.860 04:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.860 04:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:19.860 04:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.860 04:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.860 04:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.860 04:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.860 04:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.860 04:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.860 04:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:19.860 04:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:19.860 [2024-07-13 04:56:25.889777] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:19.860 [2024-07-13 04:56:25.889959] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid574349 ] 00:07:19.860 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.860 [2024-07-13 04:56:26.015283] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.860 [2024-07-13 04:56:26.276483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.118 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:20.118 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.118 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:20.118 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:20.118 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:20.118 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.118 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:20.118 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:20.118 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:20.118 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.118 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:20.118 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:20.118 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:20.118 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.118 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:20.118 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
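Every accel_perf run above re-initializes SPDK/DPDK, and the "[ DPDK EAL parameters: ... ]" line records the derived EAL arguments: `-c 0x1` is a CPU core mask selecting core 0 only, which is why each run reports "Total cores available: 1" and a single "Reactor started on core 0", while `--file-prefix=spdk_pidNNNNNN` keeps each run's shared-memory files distinct. Expanding any such mask is plain Bash arithmetic:

    mask=0x1                          # the core mask from the EAL parameters above
    for ((i = 0; i < 64; i++)); do
        (((mask >> i) & 1)) && echo "core $i enabled"
    done                              # prints only "core 0 enabled" for 0x1

The recurring "EAL: No free 2048 kB hugepages reported on node 1" notice is plausibly benign here: it flags an empty hugepage pool on NUMA node 1, and the runs proceed normally.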
00:07:20.118 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:20.118 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.118 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:20.118 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:20.118 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:20.118 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.118 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:20.118 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:20.118 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:20.118 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:20.118 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:20.119 04:56:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
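The `_C2` variants rerun the same workloads with `-C 2`, and this copy_crc32c run is the only place the trace shows `'8192 bytes'` alongside the usual `'4096 bytes'` — consistent with `-C` chaining two 4096-byte vectors (2 x 4096 = 8192). That reading is inferred from the trace, not confirmed by accel_perf's documentation:

    ./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2   # the traced _C2 invocation
    echo $((2 * 4096))                                        # 8192, matching the second traced size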
00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.052 00:07:22.052 real 0m2.688s 00:07:22.052 user 0m2.449s 00:07:22.052 sys 0m0.236s 00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.052 04:56:28 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:22.052 ************************************ 00:07:22.052 END TEST accel_copy_crc32c_C2 00:07:22.052 ************************************ 00:07:22.310 04:56:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:22.311 04:56:28 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:22.311 04:56:28 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:22.311 04:56:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.311 04:56:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.311 ************************************ 00:07:22.311 START TEST accel_dualcast 00:07:22.311 ************************************ 00:07:22.311 04:56:28 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:22.311 04:56:28 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:22.311 04:56:28 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:22.311 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.311 04:56:28 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:22.311 04:56:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.311 04:56:28 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:22.311 04:56:28 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:22.311 04:56:28 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.311 04:56:28 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.311 04:56:28 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.311 04:56:28 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.311 04:56:28 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.311 04:56:28 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:22.311 04:56:28 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:22.311 [2024-07-13 04:56:28.620834] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
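At the start of each test (sh@12 and sh@31-41 above), build_accel_config assembles optional accel-module JSON fragments into `accel_json_cfg=()`; in these runs the array stays empty, so every `[[ 0 -gt 0 ]]` guard is false and `jq -r .` just normalizes an empty config, which accel_perf reads via `-c /dev/fd/62`. A sketch of the idiom — the JSON skeleton and the explicit `62<` redirection are assumptions chosen to match the traced `/dev/fd/62` path, and `accel_perf` stands for the full build/examples binary:

    build_accel_config() {
        local accel_json_cfg=()       # module-specific fragments would be appended here
        local IFS=,                   # sh@40: comma-join the fragments
        jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
    }
    accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 62< <(build_accel_config)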
00:07:22.311 [2024-07-13 04:56:28.620995] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid574646 ] 00:07:22.311 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.311 [2024-07-13 04:56:28.750012] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.569 [2024-07-13 04:56:29.011377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.827 04:56:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.827 04:56:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.827 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.827 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.827 04:56:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.827 04:56:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.827 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.827 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.827 04:56:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:22.827 04:56:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.827 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.827 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.827 04:56:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.827 04:56:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.827 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.827 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.827 04:56:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.827 04:56:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.827 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.827 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.827 04:56:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:22.827 04:56:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.827 04:56:29 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:22.827 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.827 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.827 04:56:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.827 04:56:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.827 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.828 04:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.354 04:56:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:25.354 04:56:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:25.354 04:56:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.354 04:56:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.354 04:56:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:25.354 04:56:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:25.354 04:56:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.354 04:56:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.354 04:56:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:25.354 04:56:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:25.354 04:56:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.354 04:56:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.354 04:56:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:25.354 04:56:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:25.354 04:56:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.354 04:56:31 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.354 04:56:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:25.354 04:56:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:25.354 04:56:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.354 04:56:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.354 04:56:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:25.354 04:56:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:25.354 04:56:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.354 04:56:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.354 04:56:31 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.354 04:56:31 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:25.354 04:56:31 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.354 00:07:25.354 real 0m2.695s 00:07:25.354 user 0m0.009s 00:07:25.354 sys 0m0.003s 00:07:25.354 04:56:31 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.354 04:56:31 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:25.354 ************************************ 00:07:25.354 END TEST accel_dualcast 00:07:25.354 ************************************ 00:07:25.354 04:56:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:25.354 04:56:31 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:25.354 04:56:31 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:25.354 04:56:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.354 04:56:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.354 ************************************ 00:07:25.354 START TEST accel_compare 00:07:25.354 ************************************ 00:07:25.354 04:56:31 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:25.354 04:56:31 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:25.354 04:56:31 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:25.354 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.354 04:56:31 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:25.354 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.354 04:56:31 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:25.354 04:56:31 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:25.354 04:56:31 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.354 04:56:31 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.354 04:56:31 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.354 04:56:31 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.354 04:56:31 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.354 04:56:31 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:25.354 04:56:31 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:25.354 [2024-07-13 04:56:31.361879] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
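The accel_compare block that begins here is driven by the same run_test/accel_test wrapper as the dualcast test above: per the trace, it execs build/examples/accel_perf with -t 1 -w compare -y and feeds the accel JSON config over /dev/fd/62 (empty in this run, since accel_json_cfg=() and every module check fails, so the software module handles the opcode). A minimal hand reproduction, assuming the SPDK build tree from this workspace and assuming -c may simply be omitted when no accel modules are configured:

  # sketch: rerun the 1-second software "compare" workload with verification (-y)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w compare -y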
00:07:25.354 [2024-07-13 04:56:31.362023] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid575045 ] 00:07:25.354 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.354 [2024-07-13 04:56:31.490928] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.354 [2024-07-13 04:56:31.751828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.613 04:56:31 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.613 04:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.509 04:56:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:27.509 04:56:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.509 04:56:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.509 04:56:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.509 04:56:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:27.509 04:56:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.509 04:56:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.509 04:56:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.509 04:56:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:27.509 04:56:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.509 04:56:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.509 04:56:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.509 04:56:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:27.509 04:56:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.509 04:56:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.509 04:56:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.509 
04:56:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:27.509 04:56:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.509 04:56:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.509 04:56:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.509 04:56:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:27.509 04:56:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.509 04:56:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.509 04:56:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.767 04:56:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.767 04:56:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:27.767 04:56:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.767 00:07:27.767 real 0m2.695s 00:07:27.767 user 0m2.458s 00:07:27.767 sys 0m0.234s 00:07:27.767 04:56:34 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.767 04:56:34 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:27.767 ************************************ 00:07:27.767 END TEST accel_compare 00:07:27.767 ************************************ 00:07:27.767 04:56:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:27.767 04:56:34 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:27.767 04:56:34 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:27.767 04:56:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.767 04:56:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.767 ************************************ 00:07:27.767 START TEST accel_xor 00:07:27.767 ************************************ 00:07:27.767 04:56:34 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:27.767 04:56:34 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:27.767 04:56:34 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:27.767 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.767 04:56:34 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:27.767 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.767 04:56:34 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:27.767 04:56:34 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:27.767 04:56:34 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.767 04:56:34 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.767 04:56:34 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.767 04:56:34 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.767 04:56:34 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.767 04:56:34 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:27.767 04:56:34 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:27.767 [2024-07-13 04:56:34.100828] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
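Two xor variants follow. The run_test line above invokes accel_test -t 1 -w xor -y, which XORs source buffers into a destination; the trace below records val=2, i.e. the default two-source case, and a second accel_xor test later in this log adds -x 3 for three sources. Assuming the same binary and workspace path, the pair of invocations the harness drives reduces to:

  # sketch: default two-source xor, then the explicit three-source variant (-x 3),
  # flags copied from the run_test lines in this log
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3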
00:07:27.767 [2024-07-13 04:56:34.100988] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid575338 ] 00:07:27.767 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.767 [2024-07-13 04:56:34.229714] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.025 [2024-07-13 04:56:34.491468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.283 04:56:34 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:28.283 04:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.284 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.284 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.284 04:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:28.284 04:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.284 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.284 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.284 04:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.284 04:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.284 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.284 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.284 04:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:28.284 04:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.284 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.284 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.284 04:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.284 04:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.284 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.284 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.284 04:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.284 04:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.284 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.284 04:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.812 00:07:30.812 real 0m2.687s 00:07:30.812 user 0m2.456s 00:07:30.812 sys 0m0.227s 00:07:30.812 04:56:36 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.812 04:56:36 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:30.812 ************************************ 00:07:30.812 END TEST accel_xor 00:07:30.812 ************************************ 00:07:30.812 04:56:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:30.812 04:56:36 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:30.812 04:56:36 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:30.812 04:56:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.812 04:56:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.812 ************************************ 00:07:30.812 START TEST accel_xor 00:07:30.812 ************************************ 00:07:30.812 04:56:36 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:30.812 04:56:36 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:30.812 [2024-07-13 04:56:36.831103] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
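The second accel_xor test starting here differs from the first only in -x 3, visible below as val=3: three 4096-byte source buffers are XORed into one destination on each iteration. The operation itself is plain bytewise xor; a one-line illustration in shell arithmetic (not SPDK code), shown on single bytes:

  # dst = src0 ^ src1 ^ src2
  s0=0xff; s1=0x0f; s2=0x33
  printf 'xor(%#x, %#x, %#x) = %#x\n' "$s0" "$s1" "$s2" "$(( s0 ^ s1 ^ s2 ))"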
00:07:30.812 [2024-07-13 04:56:36.831255] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid575633 ] 00:07:30.812 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.812 [2024-07-13 04:56:36.963083] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.812 [2024-07-13 04:56:37.221027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.071 04:56:37 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.071 04:56:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.971 04:56:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.971 04:56:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.971 04:56:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.971 04:56:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.971 04:56:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.971 04:56:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.971 04:56:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.971 04:56:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.230 04:56:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.230 04:56:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.230 04:56:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.230 04:56:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.230 04:56:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.230 04:56:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.230 04:56:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.230 04:56:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.230 04:56:39 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:33.230 04:56:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.230 04:56:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.230 04:56:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.230 04:56:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.230 04:56:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.230 04:56:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.230 04:56:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.230 04:56:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.230 04:56:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:33.230 04:56:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.230 00:07:33.230 real 0m2.693s 00:07:33.230 user 0m2.449s 00:07:33.230 sys 0m0.242s 00:07:33.230 04:56:39 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.230 04:56:39 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:33.230 ************************************ 00:07:33.230 END TEST accel_xor 00:07:33.230 ************************************ 00:07:33.230 04:56:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:33.230 04:56:39 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:33.230 04:56:39 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:33.230 04:56:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.230 04:56:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.230 ************************************ 00:07:33.230 START TEST accel_dif_verify 00:07:33.230 ************************************ 00:07:33.230 04:56:39 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:33.230 04:56:39 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:33.230 04:56:39 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:33.230 04:56:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:33.230 04:56:39 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:33.230 04:56:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:33.230 04:56:39 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:33.230 04:56:39 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:33.230 04:56:39 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.230 04:56:39 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.230 04:56:39 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.230 04:56:39 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.230 04:56:39 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.230 04:56:39 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:33.230 04:56:39 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:33.230 [2024-07-13 04:56:39.566487] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
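The DIF tests that begin here drop -y (the run_test line above passes only -t 1 -w dif_verify). In the trace that follows, the two 4096-byte values are buffer sizes, while the 512-byte and 8-byte values are consistent with a 512-byte logical block carrying an 8-byte T10 DIF protection-information field (guard/application/reference tags), so one 4096-byte buffer would span eight protected blocks; this reading is inferred from the traced values, not from the script source. The bare invocation, paths assumed as above:

  # sketch: verify previously generated DIF fields over a 1-second run
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify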
00:07:33.230 [2024-07-13 04:56:39.566600] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid576037 ] 00:07:33.230 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.230 [2024-07-13 04:56:39.692894] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.488 [2024-07-13 04:56:39.954189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.746 04:56:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:33.746 04:56:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:33.746 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:33.746 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:33.746 04:56:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:33.746 04:56:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:33.746 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:33.746 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:33.746 04:56:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:33.746 04:56:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:33.746 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:33.746 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:33.746 04:56:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:33.746 04:56:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:33.746 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:33.747 04:56:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.276 04:56:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:07:36.276 04:56:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.276 04:56:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.276 04:56:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.276 04:56:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.276 04:56:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.276 04:56:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.276 04:56:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.276 04:56:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.276 04:56:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.276 04:56:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.276 04:56:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.276 04:56:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.276 04:56:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.276 04:56:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.277 04:56:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.277 04:56:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.277 04:56:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.277 04:56:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.277 04:56:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.277 04:56:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.277 04:56:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.277 04:56:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.277 04:56:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.277 04:56:42 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.277 04:56:42 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:36.277 04:56:42 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.277 00:07:36.277 real 0m2.689s 00:07:36.277 user 0m0.008s 00:07:36.277 sys 0m0.005s 00:07:36.277 04:56:42 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.277 04:56:42 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:36.277 ************************************ 00:07:36.277 END TEST accel_dif_verify 00:07:36.277 ************************************ 00:07:36.277 04:56:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:36.277 04:56:42 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:36.277 04:56:42 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:36.277 04:56:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.277 04:56:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.277 ************************************ 00:07:36.277 START TEST accel_dif_generate 00:07:36.277 ************************************ 00:07:36.277 04:56:42 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:36.277 04:56:42 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:36.277 04:56:42 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:36.277 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.277 
04:56:42 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:36.277 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.277 04:56:42 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:36.277 04:56:42 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:36.277 04:56:42 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.277 04:56:42 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.277 04:56:42 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.277 04:56:42 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.277 04:56:42 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.277 04:56:42 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:36.277 04:56:42 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:36.277 [2024-07-13 04:56:42.303815] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:36.277 [2024-07-13 04:56:42.303970] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid576331 ] 00:07:36.277 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.277 [2024-07-13 04:56:42.447294] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.277 [2024-07-13 04:56:42.708201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:36.542 04:56:42 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.542 04:56:42 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.542 04:56:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:39.073 04:56:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:39.073 04:56:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:39.073 04:56:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:39.073 04:56:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:39.073 04:56:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:39.073 04:56:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:39.073 04:56:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:39.073 04:56:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:39.073 04:56:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:39.073 04:56:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:39.073 04:56:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:39.073 04:56:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:39.073 04:56:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:39.073 04:56:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:39.073 04:56:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:39.073 04:56:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:39.073 04:56:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:39.073 04:56:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:39.073 04:56:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:39.073 04:56:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:39.073 04:56:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:39.073 04:56:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:39.073 04:56:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:39.073 04:56:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:39.073 04:56:44 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:39.073 04:56:44 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:39.073 04:56:44 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.073 00:07:39.073 real 0m2.709s 00:07:39.073 user 0m2.462s 00:07:39.073 sys 0m0.244s 00:07:39.073 04:56:44 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.073 04:56:44 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:39.073 ************************************ 00:07:39.073 END TEST accel_dif_generate 00:07:39.073 ************************************ 00:07:39.073 04:56:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:39.073 04:56:44 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:39.073 04:56:44 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:39.073 04:56:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.073 04:56:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.073 ************************************ 00:07:39.073 START TEST accel_dif_generate_copy 00:07:39.073 ************************************ 00:07:39.073 04:56:45 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:39.073 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:39.073 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:39.073 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.073 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:39.073 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.073 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:39.073 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:39.073 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.073 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.073 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.073 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.073 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.073 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:39.073 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:39.073 [2024-07-13 04:56:45.057899] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
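Stripped of the xtrace noise, each of these accel cases reduces to a single accel_perf run. A minimal sketch of the pattern, with the binary path and flags taken verbatim from the trace above and the run_test/accel_test plumbing deliberately simplified (this stand-in is not the real accel.sh helper):

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    accel_test() {
        # -c reads the accel JSON config from an inherited fd; -t is the run
        # time in seconds, -w the workload name (simplified stand-in, not the
        # actual accel.sh source).
        "$SPDK_ROOT/build/examples/accel_perf" -c /dev/fd/62 "$@"
    }
    accel_test -t 1 -w dif_generate_copy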
00:07:39.073 [2024-07-13 04:56:45.058030] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid576741 ] 00:07:39.074 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.074 [2024-07-13 04:56:45.186585] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.074 [2024-07-13 04:56:45.447541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
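The wall of IFS=: / read -r var val / case "$var" in entries that follows is accel.sh's config-parsing loop running under set -x: every val=... line is one field being consumed, and the @22/@23 hits record the detected module and opcode. A hedged reconstruction of the loop's shape; the field names fed in here are illustrative and the real matching logic in accel.sh may differ:

    printf '%s\n' 'opc:dif_generate_copy' 'module:software' |
    while IFS=: read -r var val; do
        case "$var" in
            *opc*)    accel_opc=$val ;;     # accel.sh@23 in the trace
            *module*) accel_module=$val ;;  # accel.sh@22 in the trace
        esac
    done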
00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:39.331 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.332 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.332 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.332 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:39.332 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.332 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.332 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.332 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:39.332 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.332 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.332 04:56:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.227 00:07:41.227 real 0m2.689s 00:07:41.227 user 0m2.441s 00:07:41.227 sys 0m0.246s 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.227 04:56:47 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:41.227 ************************************ 00:07:41.227 END TEST accel_dif_generate_copy 00:07:41.227 ************************************ 00:07:41.484 04:56:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:41.484 04:56:47 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:41.484 04:56:47 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:41.484 04:56:47 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:41.484 04:56:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.484 04:56:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.484 ************************************ 00:07:41.484 START TEST accel_comp 00:07:41.484 ************************************ 00:07:41.484 04:56:47 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:41.484 04:56:47 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:07:41.484 04:56:47 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:41.484 04:56:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.484 04:56:47 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:41.484 04:56:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.484 04:56:47 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:41.484 04:56:47 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:41.484 04:56:47 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.484 04:56:47 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.484 04:56:47 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.484 04:56:47 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.484 04:56:47 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.484 04:56:47 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:41.484 04:56:47 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:41.484 [2024-07-13 04:56:47.793511] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:41.484 [2024-07-13 04:56:47.793630] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid577026 ] 00:07:41.484 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.484 [2024-07-13 04:56:47.922705] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.743 [2024-07-13 04:56:48.188305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:42.001 04:56:48 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:42.001 04:56:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.002 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:42.002 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:42.002 04:56:48 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:42.002 04:56:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.002 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:42.002 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:42.002 04:56:48 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:42.002 04:56:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.002 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:42.002 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:42.002 04:56:48 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:42.002 04:56:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.002 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:42.002 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:42.002 04:56:48 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:42.002 04:56:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.002 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:42.002 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:07:42.002 04:56:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:42.002 04:56:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.002 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:42.002 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:42.002 04:56:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:42.002 04:56:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.002 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:42.002 04:56:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.527 04:56:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:44.527 04:56:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.527 04:56:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.527 04:56:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.527 04:56:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:44.527 04:56:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.527 04:56:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.527 04:56:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.527 04:56:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:44.527 04:56:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.527 04:56:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.527 04:56:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.527 04:56:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:44.527 04:56:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.527 04:56:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.527 04:56:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.527 04:56:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:44.527 04:56:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.527 04:56:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.527 04:56:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.527 04:56:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:44.527 04:56:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.527 04:56:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.527 04:56:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.527 04:56:50 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.527 04:56:50 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:44.527 04:56:50 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.527 00:07:44.527 real 0m2.698s 00:07:44.527 user 0m2.444s 00:07:44.527 sys 0m0.252s 00:07:44.527 04:56:50 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.527 04:56:50 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:44.527 ************************************ 00:07:44.527 END TEST accel_comp 00:07:44.527 ************************************ 00:07:44.527 04:56:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:44.527 04:56:50 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:44.527 04:56:50 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:44.527 04:56:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.527 04:56:50 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:44.527 ************************************ 00:07:44.527 START TEST accel_decomp 00:07:44.527 ************************************ 00:07:44.527 04:56:50 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:44.527 04:56:50 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:44.527 04:56:50 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:44.527 04:56:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.527 04:56:50 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:44.527 04:56:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.527 04:56:50 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:44.527 04:56:50 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:44.527 04:56:50 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.527 04:56:50 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.527 04:56:50 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.527 04:56:50 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.527 04:56:50 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.527 04:56:50 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:44.527 04:56:50 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:44.527 [2024-07-13 04:56:50.534938] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
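For the (de)compression cases the harness also hands accel_perf an input file. The decompress run being set up here comes down to the following, flags copied from the command line above; on my reading of the example's options, -l names the input corpus and -y asks it to verify the result:

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_ROOT/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
        -l "$SPDK_ROOT/test/accel/bib" -y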
00:07:44.527 [2024-07-13 04:56:50.535057] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid577322 ] 00:07:44.527 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.527 [2024-07-13 04:56:50.667677] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.527 [2024-07-13 04:56:50.925087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.786 04:56:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:46.708 04:56:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:46.708 04:56:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.708 04:56:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:46.708 04:56:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:46.708 04:56:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:46.708 04:56:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.708 04:56:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:46.708 04:56:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:46.708 04:56:53 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:46.708 04:56:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.708 04:56:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:46.708 04:56:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:46.708 04:56:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:46.708 04:56:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.708 04:56:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:46.708 04:56:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:46.708 04:56:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:46.708 04:56:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.708 04:56:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:46.708 04:56:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:46.708 04:56:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:46.708 04:56:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.708 04:56:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:46.708 04:56:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:46.708 04:56:53 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:46.708 04:56:53 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:46.708 04:56:53 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.708 00:07:46.708 real 0m2.693s 00:07:46.708 user 0m0.010s 00:07:46.708 sys 0m0.003s 00:07:46.708 04:56:53 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.708 04:56:53 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:46.708 ************************************ 00:07:46.708 END TEST accel_decomp 00:07:46.708 ************************************ 00:07:46.966 04:56:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:46.966 04:56:53 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:46.966 04:56:53 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:46.966 04:56:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.966 04:56:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:46.966 ************************************ 00:07:46.966 START TEST accel_decomp_full 00:07:46.966 ************************************ 00:07:46.966 04:56:53 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:46.966 04:56:53 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:46.966 04:56:53 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:46.966 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.966 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.966 04:56:53 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:46.966 04:56:53 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:46.966 04:56:53 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:46.966 04:56:53 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.966 04:56:53 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.966 04:56:53 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.966 04:56:53 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.966 04:56:53 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.966 04:56:53 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:46.966 04:56:53 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:46.966 [2024-07-13 04:56:53.270168] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:46.966 [2024-07-13 04:56:53.270287] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid577732 ] 00:07:46.966 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.966 [2024-07-13 04:56:53.397517] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.223 [2024-07-13 04:56:53.659241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.482 04:56:53 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.482 04:56:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.009 04:56:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:50.009 04:56:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.009 04:56:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.009 04:56:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.009 04:56:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:50.009 04:56:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.009 04:56:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.009 04:56:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.009 04:56:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:50.009 04:56:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.009 04:56:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.009 04:56:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.009 04:56:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:50.009 04:56:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.009 04:56:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.009 04:56:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.009 04:56:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:50.009 04:56:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.009 04:56:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.009 04:56:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.009 04:56:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:50.009 04:56:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.009 04:56:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.009 04:56:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.009 04:56:55 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:50.009 04:56:55 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:50.009 04:56:55 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.009 00:07:50.009 real 0m2.699s 00:07:50.009 user 0m2.459s 00:07:50.009 sys 0m0.237s 00:07:50.009 04:56:55 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.009 04:56:55 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:50.009 ************************************ 00:07:50.009 END TEST accel_decomp_full 00:07:50.009 ************************************ 00:07:50.009 04:56:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:50.009 04:56:55 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:50.009 04:56:55 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:07:50.009 04:56:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.009 04:56:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.009 ************************************ 00:07:50.009 START TEST accel_decomp_mcore 00:07:50.009 ************************************ 00:07:50.009 04:56:55 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:50.009 04:56:55 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:50.009 04:56:55 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:50.009 04:56:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.009 04:56:55 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:50.009 04:56:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.009 04:56:55 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:50.009 04:56:55 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:50.009 04:56:55 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.009 04:56:55 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.009 04:56:55 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.009 04:56:55 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.009 04:56:55 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.009 04:56:55 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:50.009 04:56:55 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:50.009 [2024-07-13 04:56:56.015972] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
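The -m 0xf passed to accel_decomp_mcore is a core mask: binary 1111 selects cores 0 through 3, which matches the "Total cores available: 4" notice and the four reactors started in the lines below. A quick shell one-off to expand such a mask:

    mask=0xf
    for core in {0..31}; do
        # test each bit of the mask; 0xf enables cores 0-3
        (( (mask >> core) & 1 )) && echo "core $core enabled"
    done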
00:07:50.009 [2024-07-13 04:56:56.016126] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid578024 ] 00:07:50.009 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.009 [2024-07-13 04:56:56.163984] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:50.009 [2024-07-13 04:56:56.432935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.009 [2024-07-13 04:56:56.432977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.009 [2024-07-13 04:56:56.433024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.009 [2024-07-13 04:56:56.433034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.268 04:56:56 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:50.268 04:56:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.796 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:52.796 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.796 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.796 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.796 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:52.796 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.796 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.796 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.796 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:52.796 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.796 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.796 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.796 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:52.796 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.796 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.796 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.796 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:52.796 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.796 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.797 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.797 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:52.797 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.797 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.797 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.797 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:52.797 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.797 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.797 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.797 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:52.797 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.797 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.797 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.797 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:52.797 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.797 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.797 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.797 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:52.797 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:52.797 04:56:58 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.797 00:07:52.797 real 0m2.755s 00:07:52.797 user 0m0.013s 00:07:52.797 sys 0m0.001s 00:07:52.797 04:56:58 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.797 04:56:58 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:52.797 ************************************ 00:07:52.797 END TEST accel_decomp_mcore 00:07:52.797 ************************************ 00:07:52.797 04:56:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:52.797 04:56:58 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:52.797 04:56:58 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:52.797 04:56:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.797 04:56:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:52.797 ************************************ 00:07:52.797 START TEST accel_decomp_full_mcore 00:07:52.797 ************************************ 00:07:52.797 04:56:58 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:52.797 04:56:58 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:52.797 04:56:58 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:52.797 04:56:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.797 04:56:58 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:52.797 04:56:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.797 04:56:58 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:52.797 04:56:58 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:52.797 04:56:58 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.797 04:56:58 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.797 04:56:58 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.797 04:56:58 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.797 04:56:58 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.797 04:56:58 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:52.797 04:56:58 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:52.797 [2024-07-13 04:56:58.819913] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
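For orientation, the accel_decomp_full_mcore run the trace is stepping into reduces to a single accel_perf invocation. A minimal sketch, assuming the workspace path shown in the log and a placeholder empty JSON config (accel.sh really builds the config with build_accel_config and hands it over /dev/fd/62; the flag readings are inferred from the surrounding trace, not from accel_perf's help text):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -t 1: run for 1 second; -w decompress: workload; -y: verify output;
    # -o 0: submit the whole input as one buffer (the "full" variant);
    # -m 0xf: core mask, matching the four reactor NOTICEs above.
    "$SPDK/build/examples/accel_perf" -c <(echo '{}') -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -o 0 -m 0xf
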
00:07:52.797 [2024-07-13 04:56:58.820053] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid578433 ] 00:07:52.797 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.797 [2024-07-13 04:56:58.950612] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.797 [2024-07-13 04:56:59.218384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.797 [2024-07-13 04:56:59.218440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.797 [2024-07-13 04:56:59.218494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.797 [2024-07-13 04:56:59.218505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.055 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.056 04:56:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:55.583 00:07:55.583 real 0m2.759s 00:07:55.583 user 0m0.013s 00:07:55.583 sys 0m0.003s 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.583 04:57:01 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:55.583 ************************************ 00:07:55.584 END TEST accel_decomp_full_mcore 00:07:55.584 ************************************ 00:07:55.584 04:57:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:55.584 04:57:01 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:55.584 04:57:01 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:55.584 04:57:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.584 04:57:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:55.584 ************************************ 00:07:55.584 START TEST accel_decomp_mthread 00:07:55.584 ************************************ 00:07:55.584 04:57:01 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:55.584 04:57:01 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:55.584 04:57:01 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:55.584 04:57:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.584 04:57:01 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:55.584 04:57:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.584 04:57:01 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:55.584 04:57:01 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:55.584 04:57:01 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:55.584 04:57:01 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:55.584 04:57:01 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.584 04:57:01 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.584 04:57:01 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:55.584 04:57:01 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:55.584 04:57:01 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:55.584 [2024-07-13 04:57:01.627049] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
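The repeated IFS=: / read -r var val / case "$var" in lines that fill this trace are accel.sh parsing accel_perf's settings banner one "key: value" line at a time, recording which opcode and module actually ran; the later [[ -n software ]] and [[ -n decompress ]] assertions check exactly those variables. A rough reconstruction of that loop, simplified from what the xtrace implies (the banner key names in the comments are illustrative, not verbatim accel_perf output):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Read lines such as "Module: software" or "Workload: decompress"
    # from accel_perf's stdout, splitting each on the first colon.
    while IFS=: read -r var val; do
        val=${val# }                       # drop the space after the colon
        case "$var" in
            *[Mm]odule*) accel_module=$val ;;
            *[Ww]orkload*) accel_opc=$val ;;
        esac
    done < <("$SPDK/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y)
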
00:07:55.584 [2024-07-13 04:57:01.627198] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid578823 ] 00:07:55.584 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.584 [2024-07-13 04:57:01.762880] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.584 [2024-07-13 04:57:02.017938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.842 04:57:02 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.842 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.843 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.843 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:55.843 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.843 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.843 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.843 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:55.843 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.843 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.843 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.843 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:55.843 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.843 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.843 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.843 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:55.843 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.843 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.843 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.843 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.843 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.843 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.843 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.843 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.843 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.843 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.843 04:57:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.369 04:57:04 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:58.369 04:57:04 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:58.369 00:07:58.369 real 0m2.705s 00:07:58.369 user 0m2.452s 00:07:58.369 sys 0m0.250s 00:07:58.370 04:57:04 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.370 04:57:04 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:58.370 ************************************ 00:07:58.370 END TEST accel_decomp_mthread 00:07:58.370 ************************************ 00:07:58.370 04:57:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:58.370 04:57:04 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:58.370 04:57:04 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:58.370 04:57:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.370 04:57:04 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:58.370 ************************************ 00:07:58.370 START TEST accel_decomp_full_mthread 00:07:58.370 ************************************ 00:07:58.370 04:57:04 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:58.370 04:57:04 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:58.370 04:57:04 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:58.370 04:57:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.370 04:57:04 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:58.370 04:57:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.370 04:57:04 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:58.370 04:57:04 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:58.370 04:57:04 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:58.370 04:57:04 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:58.370 04:57:04 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.370 04:57:04 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.370 04:57:04 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:58.370 04:57:04 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:58.370 04:57:04 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:58.370 [2024-07-13 04:57:04.371936] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
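Each START TEST / END TEST banner pair in this log, and the real/user/sys triple between them, comes from the run_test helper in autotest_common.sh. A simplified reconstruction under that assumption (the real helper also manages suite bookkeeping and toggles xtrace around its own output):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"            # emits the real/user/sys lines seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
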
00:07:58.370 [2024-07-13 04:57:04.372080] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid579255 ] 00:07:58.370 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.370 [2024-07-13 04:57:04.500659] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.370 [2024-07-13 04:57:04.761768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.628 04:57:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:58.628 04:57:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.628 04:57:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.628 04:57:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.628 04:57:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.628 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:58.629 04:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:01.157 00:08:01.157 real 0m2.736s 00:08:01.157 user 0m2.500s 00:08:01.157 sys 0m0.232s 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:01.157 04:57:07 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:01.157 ************************************ 00:08:01.157 END TEST accel_decomp_full_mthread 

00:08:01.157 ************************************ 00:08:01.157 04:57:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:01.157 04:57:07 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:01.157 04:57:07 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:01.157 04:57:07 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:01.157 04:57:07 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:01.157 04:57:07 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:01.157 04:57:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.157 04:57:07 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:01.157 04:57:07 accel -- common/autotest_common.sh@10 -- # set +x 00:08:01.157 04:57:07 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.157 04:57:07 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.157 04:57:07 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:01.157 04:57:07 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:01.157 04:57:07 accel -- accel/accel.sh@41 -- # jq -r . 00:08:01.157 ************************************ 00:08:01.157 START TEST accel_dif_functional_tests 00:08:01.157 ************************************ 00:08:01.157 04:57:07 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:01.157 [2024-07-13 04:57:07.188896] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:01.157 [2024-07-13 04:57:07.189030] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid579550 ] 00:08:01.157 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.157 [2024-07-13 04:57:07.318575] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:01.157 [2024-07-13 04:57:07.585499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.157 [2024-07-13 04:57:07.585551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.157 [2024-07-13 04:57:07.585558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.723 00:08:01.723 00:08:01.723 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.723 http://cunit.sourceforge.net/ 00:08:01.723 00:08:01.723 00:08:01.723 Suite: accel_dif 00:08:01.723 Test: verify: DIF generated, GUARD check ...passed 00:08:01.723 Test: verify: DIF generated, APPTAG check ...passed 00:08:01.723 Test: verify: DIF generated, REFTAG check ...passed 00:08:01.723 Test: verify: DIF not generated, GUARD check ...[2024-07-13 04:57:07.940782] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:01.723 passed 00:08:01.723 Test: verify: DIF not generated, APPTAG check ...[2024-07-13 04:57:07.940905] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:01.723 passed 00:08:01.723 Test: verify: DIF not generated, REFTAG check ...[2024-07-13 04:57:07.940985] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:01.723 passed 00:08:01.723 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:01.724 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-13 04:57:07.941120] dif.c: 
841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:01.724 passed 00:08:01.724 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:01.724 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:01.724 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:01.724 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-13 04:57:07.941389] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:01.724 passed 00:08:01.724 Test: verify copy: DIF generated, GUARD check ...passed 00:08:01.724 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:01.724 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:01.724 Test: verify copy: DIF not generated, GUARD check ...[2024-07-13 04:57:07.941695] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:01.724 passed 00:08:01.724 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-13 04:57:07.941780] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:01.724 passed 00:08:01.724 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-13 04:57:07.941864] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:01.724 passed 00:08:01.724 Test: generate copy: DIF generated, GUARD check ...passed 00:08:01.724 Test: generate copy: DIF generated, APPTAG check ...passed 00:08:01.724 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:01.724 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:01.724 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:01.724 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:01.724 Test: generate copy: iovecs-len validate ...[2024-07-13 04:57:07.942370] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:01.724 passed 00:08:01.724 Test: generate copy: buffer alignment validate ...passed 00:08:01.724 00:08:01.724 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.724 suites 1 1 n/a 0 0 00:08:01.724 tests 26 26 26 0 0 00:08:01.724 asserts 115 115 115 0 n/a 00:08:01.724 00:08:01.724 Elapsed time = 0.005 seconds 00:08:03.097 00:08:03.097 real 0m2.151s 00:08:03.098 user 0m4.248s 00:08:03.098 sys 0m0.308s 00:08:03.098 04:57:09 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.098 04:57:09 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:03.098 ************************************ 00:08:03.098 END TEST accel_dif_functional_tests 00:08:03.098 ************************************ 00:08:03.098 04:57:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:03.098 00:08:03.098 real 1m4.966s 00:08:03.098 user 1m11.917s 00:08:03.098 sys 0m7.216s 00:08:03.098 04:57:09 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.098 04:57:09 accel -- common/autotest_common.sh@10 -- # set +x 00:08:03.098 ************************************ 00:08:03.098 END TEST accel 00:08:03.098 ************************************ 00:08:03.098 04:57:09 -- common/autotest_common.sh@1142 -- # return 0 00:08:03.098 04:57:09 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:03.098 04:57:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:03.098 04:57:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.098 04:57:09 -- common/autotest_common.sh@10 -- # set +x 00:08:03.098 ************************************ 00:08:03.098 START TEST accel_rpc 00:08:03.098 ************************************ 00:08:03.098 04:57:09 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:03.098 * Looking for test storage... 00:08:03.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:08:03.098 04:57:09 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:03.098 04:57:09 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=580377 00:08:03.098 04:57:09 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:03.098 04:57:09 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 580377 00:08:03.098 04:57:09 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 580377 ']' 00:08:03.098 04:57:09 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.098 04:57:09 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:03.098 04:57:09 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.098 04:57:09 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:03.098 04:57:09 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.098 [2024-07-13 04:57:09.462453] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
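The accel_rpc suite starting here follows autotest's standard target lifecycle: launch spdk_tgt paused with --wait-for-rpc, let waitforlisten poll the default /var/tmp/spdk.sock socket, drive the target over scripts/rpc.py, and finally kill it by the saved pid (580377 above). Sketched without the helpers:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
    spdk_tgt_pid=$!
    # stand-in for waitforlisten: rpc_get_methods is served even before
    # framework_start_init, so poll until the socket answers
    until "$SPDK/scripts/rpc.py" rpc_get_methods &>/dev/null; do
        sleep 0.1
    done
    # ... test body (see the opcode-assignment sketch below) ...
    kill -9 "$spdk_tgt_pid"
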
00:08:03.098 [2024-07-13 04:57:09.462611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid580377 ] 00:08:03.098 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.098 [2024-07-13 04:57:09.595970] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.356 [2024-07-13 04:57:09.855783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.921 04:57:10 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:03.921 04:57:10 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:03.921 04:57:10 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:03.921 04:57:10 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:03.921 04:57:10 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:03.921 04:57:10 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:03.921 04:57:10 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:03.921 04:57:10 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:03.921 04:57:10 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.921 04:57:10 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.921 ************************************ 00:08:03.921 START TEST accel_assign_opcode 00:08:03.921 ************************************ 00:08:04.179 04:57:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:08:04.179 04:57:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:04.179 04:57:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.179 04:57:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:04.179 [2024-07-13 04:57:10.426172] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:04.179 04:57:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.179 04:57:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:04.179 04:57:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.179 04:57:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:04.179 [2024-07-13 04:57:10.434130] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:04.179 04:57:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.179 04:57:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:04.179 04:57:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.179 04:57:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:05.116 04:57:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.117 04:57:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:05.117 04:57:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.117 04:57:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
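Note the ordering in the assign_opcode case just traced: before framework_start_init the accel_assign_opc RPC only records the mapping, so even the bogus module name "incorrect" is accepted with a NOTICE, and the last assignment made before init is the one that sticks. The check that follows in the trace boils down to this rpc.py sequence:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m incorrect  # recorded, not validated yet
    "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software   # overrides the previous mapping
    "$SPDK/scripts/rpc.py" framework_start_init                   # module resolution happens here
    # assert that the copy opcode landed on the software module:
    "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy | grep software
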
00:08:05.117 04:57:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:05.117 04:57:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:05.117 04:57:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.117 software 00:08:05.117 00:08:05.117 real 0m0.903s 00:08:05.117 user 0m0.040s 00:08:05.117 sys 0m0.008s 00:08:05.117 04:57:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.117 04:57:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:05.117 ************************************ 00:08:05.117 END TEST accel_assign_opcode 00:08:05.117 ************************************ 00:08:05.117 04:57:11 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:05.117 04:57:11 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 580377 00:08:05.117 04:57:11 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 580377 ']' 00:08:05.117 04:57:11 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 580377 00:08:05.117 04:57:11 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:08:05.117 04:57:11 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:05.117 04:57:11 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 580377 00:08:05.117 04:57:11 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:05.117 04:57:11 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:05.117 04:57:11 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 580377' 00:08:05.117 killing process with pid 580377 00:08:05.117 04:57:11 accel_rpc -- common/autotest_common.sh@967 -- # kill 580377 00:08:05.117 04:57:11 accel_rpc -- common/autotest_common.sh@972 -- # wait 580377 00:08:07.646 00:08:07.646 real 0m4.611s 00:08:07.646 user 0m4.576s 00:08:07.646 sys 0m0.647s 00:08:07.646 04:57:13 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:07.646 04:57:13 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.646 ************************************ 00:08:07.646 END TEST accel_rpc 00:08:07.646 ************************************ 00:08:07.646 04:57:13 -- common/autotest_common.sh@1142 -- # return 0 00:08:07.646 04:57:13 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:07.646 04:57:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:07.646 04:57:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.646 04:57:13 -- common/autotest_common.sh@10 -- # set +x 00:08:07.646 ************************************ 00:08:07.646 START TEST app_cmdline 00:08:07.646 ************************************ 00:08:07.646 04:57:13 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:07.646 * Looking for test storage... 
00:08:07.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:07.646 04:57:14 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:07.646 04:57:14 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=580979 00:08:07.646 04:57:14 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:07.646 04:57:14 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 580979 00:08:07.646 04:57:14 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 580979 ']' 00:08:07.646 04:57:14 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.646 04:57:14 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:07.646 04:57:14 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.646 04:57:14 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:07.646 04:57:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:07.646 [2024-07-13 04:57:14.137836] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:07.646 [2024-07-13 04:57:14.137998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid580979 ] 00:08:07.903 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.903 [2024-07-13 04:57:14.269703] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.162 [2024-07-13 04:57:14.528416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.098 04:57:15 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:09.098 04:57:15 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:08:09.098 04:57:15 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:09.356 { 00:08:09.356 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:08:09.356 "fields": { 00:08:09.356 "major": 24, 00:08:09.356 "minor": 9, 00:08:09.356 "patch": 0, 00:08:09.356 "suffix": "-pre", 00:08:09.356 "commit": "719d03c6a" 00:08:09.356 } 00:08:09.356 } 00:08:09.356 04:57:15 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:09.356 04:57:15 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:09.356 04:57:15 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:09.356 04:57:15 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:09.356 04:57:15 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:09.356 04:57:15 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.356 04:57:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:09.356 04:57:15 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:09.356 04:57:15 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:09.356 04:57:15 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.356 04:57:15 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:09.356 04:57:15 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:09.356 04:57:15 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:09.356 04:57:15 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:09.356 04:57:15 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:09.356 04:57:15 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.356 04:57:15 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.356 04:57:15 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.356 04:57:15 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.356 04:57:15 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.356 04:57:15 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.356 04:57:15 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.356 04:57:15 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:09.356 04:57:15 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:09.632 request: 00:08:09.632 { 00:08:09.632 "method": "env_dpdk_get_mem_stats", 00:08:09.632 "req_id": 1 00:08:09.632 } 00:08:09.632 Got JSON-RPC error response 00:08:09.632 response: 00:08:09.632 { 00:08:09.632 "code": -32601, 00:08:09.632 "message": "Method not found" 00:08:09.632 } 00:08:09.632 04:57:16 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:09.632 04:57:16 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:09.632 04:57:16 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:09.632 04:57:16 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:09.632 04:57:16 app_cmdline -- app/cmdline.sh@1 -- # killprocess 580979 00:08:09.632 04:57:16 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 580979 ']' 00:08:09.632 04:57:16 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 580979 00:08:09.632 04:57:16 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:08:09.632 04:57:16 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:09.632 04:57:16 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 580979 00:08:09.632 04:57:16 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:09.632 04:57:16 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:09.632 04:57:16 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 580979' 00:08:09.632 killing process with pid 580979 00:08:09.632 04:57:16 app_cmdline -- common/autotest_common.sh@967 -- # kill 580979 00:08:09.632 04:57:16 app_cmdline -- common/autotest_common.sh@972 -- # wait 580979 00:08:12.180 00:08:12.180 real 0m4.576s 00:08:12.180 user 0m5.048s 00:08:12.180 sys 0m0.676s 00:08:12.180 04:57:18 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.180 
04:57:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:12.180 ************************************ 00:08:12.180 END TEST app_cmdline 00:08:12.180 ************************************ 00:08:12.180 04:57:18 -- common/autotest_common.sh@1142 -- # return 0 00:08:12.180 04:57:18 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:12.180 04:57:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:12.180 04:57:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.180 04:57:18 -- common/autotest_common.sh@10 -- # set +x 00:08:12.180 ************************************ 00:08:12.180 START TEST version 00:08:12.180 ************************************ 00:08:12.180 04:57:18 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:12.181 * Looking for test storage... 00:08:12.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:12.181 04:57:18 version -- app/version.sh@17 -- # get_header_version major 00:08:12.181 04:57:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:12.181 04:57:18 version -- app/version.sh@14 -- # cut -f2 00:08:12.181 04:57:18 version -- app/version.sh@14 -- # tr -d '"' 00:08:12.181 04:57:18 version -- app/version.sh@17 -- # major=24 00:08:12.439 04:57:18 version -- app/version.sh@18 -- # get_header_version minor 00:08:12.439 04:57:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:12.439 04:57:18 version -- app/version.sh@14 -- # cut -f2 00:08:12.439 04:57:18 version -- app/version.sh@14 -- # tr -d '"' 00:08:12.439 04:57:18 version -- app/version.sh@18 -- # minor=9 00:08:12.439 04:57:18 version -- app/version.sh@19 -- # get_header_version patch 00:08:12.439 04:57:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:12.439 04:57:18 version -- app/version.sh@14 -- # cut -f2 00:08:12.439 04:57:18 version -- app/version.sh@14 -- # tr -d '"' 00:08:12.439 04:57:18 version -- app/version.sh@19 -- # patch=0 00:08:12.439 04:57:18 version -- app/version.sh@20 -- # get_header_version suffix 00:08:12.439 04:57:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:12.439 04:57:18 version -- app/version.sh@14 -- # cut -f2 00:08:12.439 04:57:18 version -- app/version.sh@14 -- # tr -d '"' 00:08:12.439 04:57:18 version -- app/version.sh@20 -- # suffix=-pre 00:08:12.439 04:57:18 version -- app/version.sh@22 -- # version=24.9 00:08:12.439 04:57:18 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:12.439 04:57:18 version -- app/version.sh@28 -- # version=24.9rc0 00:08:12.439 04:57:18 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:12.439 04:57:18 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
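(Annotation: the version test traced above is plain text processing over include/spdk/version.h, three steps per field. A minimal standalone sketch of the same pipeline, using the checkout path from this run; the suffix-to-rc0 mapping on the last line is inferred from the trace, where suffix=-pre surfaces as the 24.9rc0 checked just below:

  h=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$h" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$h" | cut -f2 | tr -d '"')
  patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$h" | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$h" | cut -f2 | tr -d '"')
  version="$major.$minor"
  if (( patch != 0 )); then version="$version.$patch"; fi
  [[ $suffix == -pre ]] && version+=rc0   # "-pre" in version.h shows up as "rc0" in the Python package

The result is then compared against python3 -c 'import spdk; print(spdk.__version__)', as the next records show.)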
00:08:12.439 04:57:18 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:12.439 04:57:18 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:12.439 00:08:12.439 real 0m0.115s 00:08:12.439 user 0m0.062s 00:08:12.439 sys 0m0.075s 00:08:12.439 04:57:18 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.439 04:57:18 version -- common/autotest_common.sh@10 -- # set +x 00:08:12.439 ************************************ 00:08:12.439 END TEST version 00:08:12.439 ************************************ 00:08:12.439 04:57:18 -- common/autotest_common.sh@1142 -- # return 0 00:08:12.439 04:57:18 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:12.439 04:57:18 -- spdk/autotest.sh@198 -- # uname -s 00:08:12.439 04:57:18 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:12.439 04:57:18 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:12.439 04:57:18 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:12.439 04:57:18 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:12.439 04:57:18 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:12.439 04:57:18 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:12.439 04:57:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:12.439 04:57:18 -- common/autotest_common.sh@10 -- # set +x 00:08:12.439 04:57:18 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:12.439 04:57:18 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:12.439 04:57:18 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:12.439 04:57:18 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:12.439 04:57:18 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:08:12.439 04:57:18 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:08:12.439 04:57:18 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:12.439 04:57:18 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:12.439 04:57:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.439 04:57:18 -- common/autotest_common.sh@10 -- # set +x 00:08:12.439 ************************************ 00:08:12.439 START TEST nvmf_tcp 00:08:12.439 ************************************ 00:08:12.439 04:57:18 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:12.439 * Looking for test storage... 00:08:12.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:12.439 04:57:18 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:12.439 04:57:18 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:12.439 04:57:18 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:12.439 04:57:18 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:08:12.439 04:57:18 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.439 04:57:18 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.439 04:57:18 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.439 04:57:18 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.439 04:57:18 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.439 04:57:18 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.439 04:57:18 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.439 04:57:18 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.439 04:57:18 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.439 04:57:18 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.439 04:57:18 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:12.439 04:57:18 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:12.440 04:57:18 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.440 04:57:18 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.440 04:57:18 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:12.440 04:57:18 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.440 04:57:18 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:12.440 04:57:18 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.440 04:57:18 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.440 04:57:18 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.440 04:57:18 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.440 04:57:18 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.440 04:57:18 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.440 04:57:18 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:08:12.440 04:57:18 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.440 04:57:18 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:08:12.440 04:57:18 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:12.440 04:57:18 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:12.440 04:57:18 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.440 04:57:18 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.440 04:57:18 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.440 04:57:18 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:12.440 04:57:18 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:12.440 04:57:18 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:12.440 04:57:18 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:12.440 04:57:18 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:12.440 04:57:18 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:12.440 04:57:18 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:12.440 04:57:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:12.440 04:57:18 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:12.440 04:57:18 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:12.440 04:57:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:12.440 04:57:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.440 04:57:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:12.440 ************************************ 00:08:12.440 START TEST nvmf_example 00:08:12.440 ************************************ 00:08:12.440 04:57:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:12.440 * Looking for test storage... 
00:08:12.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:12.440 04:57:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:12.700 04:57:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:08:12.701 04:57:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:14.625 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:14.625 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:14.625 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:14.626 Found net devices under 
0000:0a:00.0: cvl_0_0 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:14.626 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:14.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:14.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:08:14.626 00:08:14.626 --- 10.0.0.2 ping statistics --- 00:08:14.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.626 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:14.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:14.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:08:14.626 00:08:14.626 --- 10.0.0.1 ping statistics --- 00:08:14.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.626 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=583276 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 583276 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 583276 ']' 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
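(Annotation: condensed, the interface plumbing traced above pins one detected e810 port inside a network namespace as the target side and leaves the other in the root namespace as the initiator side; device, namespace, and address values are exactly the ones this run used:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP on the initiator side

The two pings above, one from each namespace, confirm the 10.0.0.1 <-> 10.0.0.2 link before any NVMe-oF traffic is attempted.)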
00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:14.626 04:57:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:14.885 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.819 04:57:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:15.819 04:57:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:08:15.819 04:57:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:15.819 04:57:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:15.819 04:57:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:15.819 04:57:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:15.819 04:57:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.819 04:57:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:15.819 04:57:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.819 04:57:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:15.819 04:57:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.819 04:57:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:15.819 04:57:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.819 04:57:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:15.819 04:57:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:15.819 04:57:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.819 04:57:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:15.819 04:57:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.819 04:57:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:15.819 04:57:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:15.819 04:57:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.819 04:57:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:15.819 04:57:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.819 04:57:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:15.819 04:57:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.819 04:57:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:15.819 04:57:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.819 04:57:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:15.819 04:57:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:15.819 EAL: No free 2048 kB hugepages reported on node 1 
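(Annotation: the rpc_cmd calls traced above provision the example target end to end: TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, its namespace, and a listener. Issued standalone through scripts/rpc.py, and assuming the default /var/tmp/spdk.sock this run waited on, the sequence would look roughly like:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512           # returns the bdev name, Malloc0 in this run
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

spdk_nvme_perf then connects from the root namespace as the initiator, producing the I/O summary that follows.)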
00:08:28.030 Initializing NVMe Controllers 00:08:28.030 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:28.030 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:28.030 Initialization complete. Launching workers. 00:08:28.030 ======================================================== 00:08:28.030 Latency(us) 00:08:28.030 Device Information : IOPS MiB/s Average min max 00:08:28.030 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11583.17 45.25 5526.50 1264.95 15850.47 00:08:28.030 ======================================================== 00:08:28.030 Total : 11583.17 45.25 5526.50 1264.95 15850.47 00:08:28.030 00:08:28.030 04:57:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:28.030 04:57:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:28.030 04:57:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:28.030 04:57:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:28.030 04:57:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:28.030 04:57:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:28.030 04:57:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:28.030 04:57:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:28.030 rmmod nvme_tcp 00:08:28.030 rmmod nvme_fabrics 00:08:28.030 rmmod nvme_keyring 00:08:28.030 04:57:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:28.030 04:57:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:28.030 04:57:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:28.030 04:57:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 583276 ']' 00:08:28.030 04:57:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 583276 00:08:28.030 04:57:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 583276 ']' 00:08:28.030 04:57:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 583276 00:08:28.030 04:57:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:08:28.030 04:57:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:28.030 04:57:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 583276 00:08:28.030 04:57:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:08:28.030 04:57:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:08:28.030 04:57:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 583276' 00:08:28.030 killing process with pid 583276 00:08:28.030 04:57:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 583276 00:08:28.030 04:57:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 583276 00:08:28.030 nvmf threads initialize successfully 00:08:28.030 bdev subsystem init successfully 00:08:28.030 created a nvmf target service 00:08:28.030 create targets's poll groups done 00:08:28.030 all subsystems of target started 00:08:28.030 nvmf target is running 00:08:28.030 all subsystems of target stopped 00:08:28.030 destroy targets's poll groups done 00:08:28.030 destroyed the nvmf target service 00:08:28.030 bdev subsystem finish successfully 00:08:28.030 nvmf threads destroy successfully 00:08:28.031 04:57:33 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:28.031 04:57:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:28.031 04:57:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:28.031 04:57:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:28.031 04:57:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:28.031 04:57:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.031 04:57:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:28.031 04:57:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.410 04:57:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:29.410 04:57:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:29.410 04:57:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:29.410 04:57:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:29.410 00:08:29.410 real 0m16.931s 00:08:29.410 user 0m45.046s 00:08:29.410 sys 0m4.339s 00:08:29.410 04:57:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.410 04:57:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:29.410 ************************************ 00:08:29.410 END TEST nvmf_example 00:08:29.410 ************************************ 00:08:29.410 04:57:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:29.410 04:57:35 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:29.410 04:57:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:29.410 04:57:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.410 04:57:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:29.410 ************************************ 00:08:29.410 START TEST nvmf_filesystem 00:08:29.410 ************************************ 00:08:29.410 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:29.672 * Looking for test storage... 
00:08:29.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:29.672 04:57:35 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:29.672 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:29.673 #define SPDK_CONFIG_H 00:08:29.673 #define SPDK_CONFIG_APPS 1 00:08:29.673 #define SPDK_CONFIG_ARCH native 00:08:29.673 #define SPDK_CONFIG_ASAN 1 00:08:29.673 #undef SPDK_CONFIG_AVAHI 00:08:29.673 #undef SPDK_CONFIG_CET 00:08:29.673 #define SPDK_CONFIG_COVERAGE 1 00:08:29.673 #define SPDK_CONFIG_CROSS_PREFIX 00:08:29.673 #undef SPDK_CONFIG_CRYPTO 00:08:29.673 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:29.673 #undef SPDK_CONFIG_CUSTOMOCF 00:08:29.673 #undef SPDK_CONFIG_DAOS 00:08:29.673 #define SPDK_CONFIG_DAOS_DIR 00:08:29.673 #define SPDK_CONFIG_DEBUG 1 00:08:29.673 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:29.673 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:29.673 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:29.673 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:29.673 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:29.673 #undef SPDK_CONFIG_DPDK_UADK 00:08:29.673 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:29.673 #define SPDK_CONFIG_EXAMPLES 1 00:08:29.673 #undef SPDK_CONFIG_FC 00:08:29.673 #define SPDK_CONFIG_FC_PATH 00:08:29.673 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:29.673 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:29.673 #undef SPDK_CONFIG_FUSE 00:08:29.673 #undef SPDK_CONFIG_FUZZER 00:08:29.673 #define SPDK_CONFIG_FUZZER_LIB 00:08:29.673 #undef SPDK_CONFIG_GOLANG 00:08:29.673 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:29.673 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:29.673 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:29.673 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:29.673 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:29.673 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:29.673 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:29.673 #define SPDK_CONFIG_IDXD 1 00:08:29.673 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:29.673 #undef SPDK_CONFIG_IPSEC_MB 00:08:29.673 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:29.673 #define SPDK_CONFIG_ISAL 1 00:08:29.673 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:29.673 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:29.673 #define SPDK_CONFIG_LIBDIR 00:08:29.673 #undef SPDK_CONFIG_LTO 00:08:29.673 #define SPDK_CONFIG_MAX_LCORES 128 00:08:29.673 #define SPDK_CONFIG_NVME_CUSE 1 00:08:29.673 #undef SPDK_CONFIG_OCF 00:08:29.673 #define SPDK_CONFIG_OCF_PATH 00:08:29.673 #define 
SPDK_CONFIG_OPENSSL_PATH 00:08:29.673 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:29.673 #define SPDK_CONFIG_PGO_DIR 00:08:29.673 #undef SPDK_CONFIG_PGO_USE 00:08:29.673 #define SPDK_CONFIG_PREFIX /usr/local 00:08:29.673 #undef SPDK_CONFIG_RAID5F 00:08:29.673 #undef SPDK_CONFIG_RBD 00:08:29.673 #define SPDK_CONFIG_RDMA 1 00:08:29.673 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:29.673 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:29.673 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:29.673 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:29.673 #define SPDK_CONFIG_SHARED 1 00:08:29.673 #undef SPDK_CONFIG_SMA 00:08:29.673 #define SPDK_CONFIG_TESTS 1 00:08:29.673 #undef SPDK_CONFIG_TSAN 00:08:29.673 #define SPDK_CONFIG_UBLK 1 00:08:29.673 #define SPDK_CONFIG_UBSAN 1 00:08:29.673 #undef SPDK_CONFIG_UNIT_TESTS 00:08:29.673 #undef SPDK_CONFIG_URING 00:08:29.673 #define SPDK_CONFIG_URING_PATH 00:08:29.673 #undef SPDK_CONFIG_URING_ZNS 00:08:29.673 #undef SPDK_CONFIG_USDT 00:08:29.673 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:29.673 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:29.673 #undef SPDK_CONFIG_VFIO_USER 00:08:29.673 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:29.673 #define SPDK_CONFIG_VHOST 1 00:08:29.673 #define SPDK_CONFIG_VIRTIO 1 00:08:29.673 #undef SPDK_CONFIG_VTUNE 00:08:29.673 #define SPDK_CONFIG_VTUNE_DIR 00:08:29.673 #define SPDK_CONFIG_WERROR 1 00:08:29.673 #define SPDK_CONFIG_WPDK_DIR 00:08:29.673 #undef SPDK_CONFIG_XNVME 00:08:29.673 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:08:29.673 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 1 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:08:29.674 04:57:35 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:29.674 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
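
[annotation] The records above show autotest_common.sh assembling sanitizer options before any instrumented binary runs: a LeakSanitizer suppression file is rebuilt from scratch and libfuse3 leaks are whitelisted. A minimal sketch of that idiom follows; the file path, the suppressed symbol, and the ASAN option string are copied from the trace, everything else is illustrative:

    # rebuild the suppression file so stale entries from earlier runs are dropped
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -f "$asan_suppression_file"
    # one suppression per line; "leak:<pattern>" matches any frame in a leak report
    echo "leak:libfuse3.so" >> "$asan_suppression_file"
    export LSAN_OPTIONS="suppressions=$asan_suppression_file"
    export ASAN_OPTIONS="new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0"
    export UBSAN_OPTIONS="halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134"

With abort_on_error=1 and halt_on_error=1 any sanitizer finding turns into a hard test failure rather than a warning in the log.
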
00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 585138 ]] 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 585138 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.TlU4Pp 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.TlU4Pp/tests/target /tmp/spdk.TlU4Pp 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=55284760576 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994708992 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6709948416 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941716480 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997352448 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390182912 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398944256 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8761344 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996537344 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997356544 00:08:29.675 04:57:35 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=819200 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:08:29.675 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:08:29.676 * Looking for test storage... 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=55284760576 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8924540928 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:08:29.676 04:57:35 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.676 04:57:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.676 04:57:36 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.676 04:57:36 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
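
[annotation] set_test_storage, traced above, walks a list of candidate directories and exports the first one whose filesystem has enough free space (2 GiB requested, padded to 2214592512 bytes with slack) as SPDK_TEST_STORAGE. A condensed sketch of that selection; the real script parses `df -T` into associative arrays, while this version assumes GNU `df --output` for brevity:

    # candidates mirror the trace: the test dir, then a mktemp fallback under /tmp
    testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
    storage_fallback=$(mktemp -udt spdk.XXXXXX)          # e.g. /tmp/spdk.TlU4Pp above
    requested_size=$(( 2 * 1024 * 1024 * 1024 + 64 * 1024 * 1024 ))   # 2 GiB + slack
    for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
        mkdir -p "$target_dir"
        avail=$(df --output=avail -B1 "$target_dir" | tail -1)
        if (( avail >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir
            break
        fi
    done

Here the overlay root has ~55 GB available, so the first candidate wins and "Found test storage at .../test/nvmf/target" is printed.
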
00:08:29.676 04:57:36 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.676 04:57:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.676 04:57:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.676 04:57:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.676 04:57:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:29.676 04:57:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.676 04:57:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:08:29.676 04:57:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:29.676 04:57:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:29.676 04:57:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.676 04:57:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.676 04:57:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.676 04:57:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:29.676 04:57:36 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:29.676 04:57:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:29.676 04:57:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:29.676 04:57:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:29.676 04:57:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:29.676 04:57:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:29.676 04:57:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.676 04:57:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:29.676 04:57:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:29.676 04:57:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:29.676 04:57:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.676 04:57:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:29.676 04:57:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.677 04:57:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:29.677 04:57:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:29.677 04:57:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:29.677 04:57:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:31.577 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:31.577 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.577 04:57:37 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:31.577 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:31.577 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:31.577 04:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.577 04:57:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.577 04:57:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.577 04:57:38 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.577 04:57:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:31.577 04:57:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:31.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:08:31.837 00:08:31.837 --- 10.0.0.2 ping statistics --- 00:08:31.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.837 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:31.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:08:31.837 00:08:31.837 --- 10.0.0.1 ping statistics --- 00:08:31.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.837 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:31.837 ************************************ 00:08:31.837 START TEST nvmf_filesystem_no_in_capsule 00:08:31.837 ************************************ 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=586792 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 586792 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 586792 ']' 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:31.837 04:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:31.837 [2024-07-13 04:57:38.276912] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:31.837 [2024-07-13 04:57:38.277058] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.097 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.097 [2024-07-13 04:57:38.419245] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.357 [2024-07-13 04:57:38.665085] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.357 [2024-07-13 04:57:38.665164] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.357 [2024-07-13 04:57:38.665193] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.357 [2024-07-13 04:57:38.665215] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.357 [2024-07-13 04:57:38.665237] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
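
The nvmf_tcp_init block traced above (taken because hardware NICs were detected, is_hw=yes) builds a self-contained NVMe/TCP topology out of the two cvl_* ports found under 0000:0a:00.0/1: cvl_0_0 is moved into a private network namespace to serve as the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, so initiator/target traffic crosses a real link on a single machine. A minimal bash sketch of the equivalent commands, reconstructed from the trace rather than quoted verbatim from nvmf/common.sh:

    # Sketch of the traced nvmf_tcp_init sequence; interface names come from
    # the cvl_0_0/cvl_0_1 devices discovered above.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                  # isolate the target port
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                               # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1           # target ns -> root ns

Once both pings succeed, the harness prefixes every target command with "ip netns exec $NS", which is why the nvmf_tgt invocation below runs inside the namespace.
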
00:08:32.357 [2024-07-13 04:57:38.665360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.357 [2024-07-13 04:57:38.665432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.357 [2024-07-13 04:57:38.665515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.357 [2024-07-13 04:57:38.665524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.970 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:32.970 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:32.970 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:32.970 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:32.970 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:32.970 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.971 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:32.971 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:32.971 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.971 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:32.971 [2024-07-13 04:57:39.248425] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.971 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.971 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:32.971 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.971 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:33.538 Malloc1 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:33.538 [2024-07-13 04:57:39.823676] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:33.538 { 00:08:33.538 "name": "Malloc1", 00:08:33.538 "aliases": [ 00:08:33.538 "b0dadb9e-dfee-46be-b0c9-7158b53aad8e" 00:08:33.538 ], 00:08:33.538 "product_name": "Malloc disk", 00:08:33.538 "block_size": 512, 00:08:33.538 "num_blocks": 1048576, 00:08:33.538 "uuid": "b0dadb9e-dfee-46be-b0c9-7158b53aad8e", 00:08:33.538 "assigned_rate_limits": { 00:08:33.538 "rw_ios_per_sec": 0, 00:08:33.538 "rw_mbytes_per_sec": 0, 00:08:33.538 "r_mbytes_per_sec": 0, 00:08:33.538 "w_mbytes_per_sec": 0 00:08:33.538 }, 00:08:33.538 "claimed": true, 00:08:33.538 "claim_type": "exclusive_write", 00:08:33.538 "zoned": false, 00:08:33.538 "supported_io_types": { 00:08:33.538 "read": true, 00:08:33.538 "write": true, 00:08:33.538 "unmap": true, 00:08:33.538 "flush": true, 00:08:33.538 "reset": true, 00:08:33.538 "nvme_admin": false, 00:08:33.538 "nvme_io": false, 00:08:33.538 "nvme_io_md": false, 00:08:33.538 "write_zeroes": true, 00:08:33.538 "zcopy": true, 00:08:33.538 "get_zone_info": false, 00:08:33.538 "zone_management": false, 00:08:33.538 "zone_append": false, 00:08:33.538 "compare": false, 00:08:33.538 "compare_and_write": false, 00:08:33.538 "abort": true, 00:08:33.538 "seek_hole": false, 00:08:33.538 "seek_data": false, 00:08:33.538 "copy": true, 00:08:33.538 "nvme_iov_md": false 00:08:33.538 }, 00:08:33.538 "memory_domains": [ 00:08:33.538 { 
00:08:33.538 "dma_device_id": "system", 00:08:33.538 "dma_device_type": 1 00:08:33.538 }, 00:08:33.538 { 00:08:33.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.538 "dma_device_type": 2 00:08:33.538 } 00:08:33.538 ], 00:08:33.538 "driver_specific": {} 00:08:33.538 } 00:08:33.538 ]' 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:33.538 04:57:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:34.472 04:57:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:34.472 04:57:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:34.472 04:57:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:34.472 04:57:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:34.472 04:57:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:36.379 04:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:36.379 04:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:36.379 04:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:36.379 04:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:36.379 04:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:36.379 04:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:36.379 04:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:36.379 04:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:36.379 04:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:36.379 04:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:08:36.379 04:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:36.379 04:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:36.379 04:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:36.379 04:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:36.379 04:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:36.379 04:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:36.379 04:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:36.637 04:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:37.572 04:57:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:38.511 04:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:38.511 04:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:38.511 04:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:38.511 04:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.511 04:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:38.511 ************************************ 00:08:38.511 START TEST filesystem_ext4 00:08:38.511 ************************************ 00:08:38.511 04:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:38.511 04:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:38.511 04:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:38.511 04:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:38.511 04:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:38.511 04:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:38.511 04:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:38.511 04:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:38.511 04:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:38.511 04:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:38.511 04:57:44 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:38.511 mke2fs 1.46.5 (30-Dec-2021) 00:08:38.770 Discarding device blocks: 0/522240 done 00:08:38.770 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:38.770 Filesystem UUID: f2908fa0-e9a2-4667-8d2a-a8b8cdcf2fc9 00:08:38.770 Superblock backups stored on blocks: 00:08:38.770 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:38.770 00:08:38.770 Allocating group tables: 0/64 done 00:08:38.770 Writing inode tables: 0/64 done 00:08:38.770 Creating journal (8192 blocks): done 00:08:38.770 Writing superblocks and filesystem accounting information: 0/64 done 00:08:38.770 00:08:38.770 04:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:38.770 04:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:39.710 04:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:39.710 04:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:39.710 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:39.710 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:39.710 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:39.710 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:39.710 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 586792 00:08:39.710 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:39.710 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:39.710 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:39.710 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:39.710 00:08:39.710 real 0m1.124s 00:08:39.710 user 0m0.020s 00:08:39.710 sys 0m0.048s 00:08:39.710 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.710 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:39.710 ************************************ 00:08:39.710 END TEST filesystem_ext4 00:08:39.710 ************************************ 00:08:39.710 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:39.710 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:39.710 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:39.710 04:57:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.710 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:39.710 ************************************ 00:08:39.710 START TEST filesystem_btrfs 00:08:39.710 ************************************ 00:08:39.710 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:39.710 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:39.710 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:39.710 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:39.710 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:39.710 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:39.710 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:39.710 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:39.710 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:39.710 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:39.710 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:39.969 btrfs-progs v6.6.2 00:08:39.969 See https://btrfs.readthedocs.io for more information. 00:08:39.969 00:08:39.969 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:39.969 NOTE: several default settings have changed in version 5.15, please make sure 00:08:39.969 this does not affect your deployments: 00:08:39.969 - DUP for metadata (-m dup) 00:08:39.969 - enabled no-holes (-O no-holes) 00:08:39.969 - enabled free-space-tree (-R free-space-tree) 00:08:39.969 00:08:39.969 Label: (null) 00:08:39.970 UUID: 025c1d56-d795-4108-89fa-f9f2f36316a6 00:08:39.970 Node size: 16384 00:08:39.970 Sector size: 4096 00:08:39.970 Filesystem size: 510.00MiB 00:08:39.970 Block group profiles: 00:08:39.970 Data: single 8.00MiB 00:08:39.970 Metadata: DUP 32.00MiB 00:08:39.970 System: DUP 8.00MiB 00:08:39.970 SSD detected: yes 00:08:39.970 Zoned device: no 00:08:39.970 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:39.970 Runtime features: free-space-tree 00:08:39.970 Checksum: crc32c 00:08:39.970 Number of devices: 1 00:08:39.970 Devices: 00:08:39.970 ID SIZE PATH 00:08:39.970 1 510.00MiB /dev/nvme0n1p1 00:08:39.970 00:08:39.970 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:39.970 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:40.538 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:40.538 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:40.538 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:40.538 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:40.538 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:40.538 04:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:40.538 04:57:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 586792 00:08:40.538 04:57:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:40.538 04:57:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:40.538 04:57:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:40.538 04:57:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:40.538 00:08:40.538 real 0m0.917s 00:08:40.538 user 0m0.018s 00:08:40.538 sys 0m0.112s 00:08:40.538 04:57:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:40.538 04:57:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:40.538 ************************************ 00:08:40.538 END TEST filesystem_btrfs 00:08:40.538 ************************************ 00:08:40.538 04:57:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:40.538 04:57:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:40.538 04:57:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:40.538 04:57:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.538 04:57:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:40.796 ************************************ 00:08:40.796 START TEST filesystem_xfs 00:08:40.796 ************************************ 00:08:40.796 04:57:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:40.796 04:57:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:40.796 04:57:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:40.796 04:57:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:40.796 04:57:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:40.796 04:57:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:40.796 04:57:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:40.796 04:57:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:08:40.796 04:57:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:40.796 04:57:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:40.796 04:57:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:40.796 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:40.796 = sectsz=512 attr=2, projid32bit=1 00:08:40.796 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:40.796 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:40.796 data = bsize=4096 blocks=130560, imaxpct=25 00:08:40.796 = sunit=0 swidth=0 blks 00:08:40.796 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:40.796 log =internal log bsize=4096 blocks=16384, version=2 00:08:40.796 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:40.797 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:41.736 Discarding blocks...Done. 
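
Each filesystem case (ext4 and btrfs above, now xfs) runs the same smoke test from target/filesystem.sh: mount the freshly formatted partition, create and delete a file with a sync on either side, unmount, then verify the target process survived and the namespace is still exported. A condensed sketch of the traced filesystem.sh@23-@43 steps, paraphrased rather than quoted (586792 is this run's nvmf_tgt PID; the umount retry loop is omitted):

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa                    # one metadata write...
    sync
    rm /mnt/device/aaa                       # ...and one delete, flushed out
    sync
    umount /mnt/device
    kill -0 586792                           # nvmf_tgt must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still attached
    lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition table still intact
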
00:08:41.736 04:57:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:41.736 04:57:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:44.291 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:44.291 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:44.291 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:44.291 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:44.291 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:44.291 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 586792 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:44.292 00:08:44.292 real 0m3.265s 00:08:44.292 user 0m0.019s 00:08:44.292 sys 0m0.060s 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:44.292 ************************************ 00:08:44.292 END TEST filesystem_xfs 00:08:44.292 ************************************ 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:44.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:44.292 04:57:50 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 586792 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 586792 ']' 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 586792 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 586792 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 586792' 00:08:44.292 killing process with pid 586792 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 586792 00:08:44.292 04:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 586792 00:08:46.831 04:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:46.831 00:08:46.831 real 0m15.140s 00:08:46.831 user 0m55.990s 00:08:46.831 sys 0m2.004s 00:08:46.831 04:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:46.831 04:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:46.831 ************************************ 00:08:46.831 END TEST nvmf_filesystem_no_in_capsule 00:08:46.831 ************************************ 00:08:47.090 04:57:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:47.090 04:57:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:47.090 04:57:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 
']' 00:08:47.090 04:57:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.090 04:57:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:47.090 ************************************ 00:08:47.090 START TEST nvmf_filesystem_in_capsule 00:08:47.090 ************************************ 00:08:47.090 04:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:47.090 04:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:47.091 04:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:47.091 04:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:47.091 04:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:47.091 04:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:47.091 04:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=588809 00:08:47.091 04:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:47.091 04:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 588809 00:08:47.091 04:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 588809 ']' 00:08:47.091 04:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.091 04:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:47.091 04:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.091 04:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:47.091 04:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:47.091 [2024-07-13 04:57:53.470165] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:47.091 [2024-07-13 04:57:53.470337] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.091 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.351 [2024-07-13 04:57:53.607733] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:47.610 [2024-07-13 04:57:53.858310] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:47.610 [2024-07-13 04:57:53.858374] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
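
The second pass, nvmf_filesystem_in_capsule, re-runs the identical provisioning with one change: the TCP transport is created with a 4096-byte in-capsule data size (-c 4096 instead of -c 0), so small host writes can travel inside the NVMe command capsule instead of requiring a separate R2T/data exchange. The RPC sequence as it appears in the trace, collected into one sketch (rpc_cmd is assumed to resolve to SPDK's scripts/rpc.py against the target running in the namespace):

    # Target provisioning, per the trace; only -c differs from the first pass.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096    # 4 KiB in-capsule
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1              # 512 MiB, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
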
00:08:47.610 [2024-07-13 04:57:53.858413] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:47.610 [2024-07-13 04:57:53.858430] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:47.610 [2024-07-13 04:57:53.858448] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:47.610 [2024-07-13 04:57:53.858642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.610 [2024-07-13 04:57:53.858703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:47.610 [2024-07-13 04:57:53.858744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.610 [2024-07-13 04:57:53.858755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:48.184 04:57:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:48.184 04:57:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:48.184 04:57:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:48.184 04:57:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:48.184 04:57:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:48.184 04:57:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:48.184 04:57:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:48.184 04:57:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:48.184 04:57:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.184 04:57:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:48.184 [2024-07-13 04:57:54.419348] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:48.184 04:57:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.184 04:57:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:48.184 04:57:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.184 04:57:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:48.755 Malloc1 00:08:48.755 04:57:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.755 04:57:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:48.755 04:57:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.755 04:57:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:48.755 04:57:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.755 04:57:54 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:48.755 04:57:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.755 04:57:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:48.755 04:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.755 04:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:48.755 04:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.755 04:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:48.755 [2024-07-13 04:57:55.007745] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:48.755 04:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.755 04:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:48.755 04:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:48.755 04:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:48.755 04:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:48.755 04:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:48.755 04:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:48.755 04:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.755 04:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:48.755 04:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.755 04:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:48.755 { 00:08:48.755 "name": "Malloc1", 00:08:48.755 "aliases": [ 00:08:48.755 "c5dfa3b6-0018-49f6-bdf6-f2cfd31a2773" 00:08:48.755 ], 00:08:48.755 "product_name": "Malloc disk", 00:08:48.755 "block_size": 512, 00:08:48.755 "num_blocks": 1048576, 00:08:48.755 "uuid": "c5dfa3b6-0018-49f6-bdf6-f2cfd31a2773", 00:08:48.755 "assigned_rate_limits": { 00:08:48.755 "rw_ios_per_sec": 0, 00:08:48.755 "rw_mbytes_per_sec": 0, 00:08:48.755 "r_mbytes_per_sec": 0, 00:08:48.755 "w_mbytes_per_sec": 0 00:08:48.755 }, 00:08:48.755 "claimed": true, 00:08:48.755 "claim_type": "exclusive_write", 00:08:48.755 "zoned": false, 00:08:48.755 "supported_io_types": { 00:08:48.755 "read": true, 00:08:48.755 "write": true, 00:08:48.755 "unmap": true, 00:08:48.755 "flush": true, 00:08:48.755 "reset": true, 00:08:48.755 "nvme_admin": false, 00:08:48.755 "nvme_io": false, 00:08:48.755 "nvme_io_md": false, 00:08:48.755 "write_zeroes": true, 00:08:48.755 "zcopy": true, 00:08:48.755 "get_zone_info": false, 00:08:48.755 "zone_management": false, 00:08:48.755 
"zone_append": false, 00:08:48.755 "compare": false, 00:08:48.755 "compare_and_write": false, 00:08:48.755 "abort": true, 00:08:48.755 "seek_hole": false, 00:08:48.755 "seek_data": false, 00:08:48.755 "copy": true, 00:08:48.755 "nvme_iov_md": false 00:08:48.755 }, 00:08:48.755 "memory_domains": [ 00:08:48.755 { 00:08:48.755 "dma_device_id": "system", 00:08:48.755 "dma_device_type": 1 00:08:48.755 }, 00:08:48.755 { 00:08:48.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.755 "dma_device_type": 2 00:08:48.755 } 00:08:48.755 ], 00:08:48.755 "driver_specific": {} 00:08:48.755 } 00:08:48.755 ]' 00:08:48.755 04:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:48.755 04:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:48.755 04:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:48.755 04:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:48.755 04:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:48.755 04:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:48.755 04:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:48.755 04:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:49.325 04:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:49.325 04:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:49.325 04:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:49.325 04:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:49.325 04:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:51.867 04:57:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:51.867 04:57:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:51.867 04:57:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:51.867 04:57:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:51.867 04:57:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:51.867 04:57:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:51.867 04:57:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:51.867 04:57:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:08:51.867 04:57:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:51.867 04:57:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:51.867 04:57:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:51.867 04:57:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:51.867 04:57:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:51.867 04:57:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:51.867 04:57:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:51.867 04:57:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:51.867 04:57:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:51.867 04:57:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:52.434 04:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:53.811 04:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:53.811 04:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:53.811 04:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:53.811 04:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.811 04:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:53.811 ************************************ 00:08:53.811 START TEST filesystem_in_capsule_ext4 00:08:53.812 ************************************ 00:08:53.812 04:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:53.812 04:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:53.812 04:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:53.812 04:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:53.812 04:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:53.812 04:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:53.812 04:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:53.812 04:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:53.812 04:57:59 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:53.812 04:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:53.812 04:57:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:53.812 mke2fs 1.46.5 (30-Dec-2021) 00:08:53.812 Discarding device blocks: 0/522240 done 00:08:53.812 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:53.812 Filesystem UUID: 82239ab9-132c-460d-8140-f3884f91a79e 00:08:53.812 Superblock backups stored on blocks: 00:08:53.812 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:53.812 00:08:53.812 Allocating group tables: 0/64 done 00:08:53.812 Writing inode tables: 0/64 done 00:08:53.812 Creating journal (8192 blocks): done 00:08:53.812 Writing superblocks and filesystem accounting information: 0/64 done 00:08:53.812 00:08:53.812 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:53.812 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:53.812 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:53.812 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:53.812 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:53.812 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:53.812 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:53.812 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:54.071 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 588809 00:08:54.071 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:54.071 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:54.071 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:54.071 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:54.071 00:08:54.071 real 0m0.393s 00:08:54.071 user 0m0.017s 00:08:54.071 sys 0m0.048s 00:08:54.071 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:54.071 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:54.071 ************************************ 00:08:54.071 END TEST filesystem_in_capsule_ext4 00:08:54.071 ************************************ 00:08:54.071 
04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:54.071 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:54.071 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:54.071 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.071 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:54.071 ************************************ 00:08:54.071 START TEST filesystem_in_capsule_btrfs 00:08:54.071 ************************************ 00:08:54.071 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:54.071 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:54.071 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:54.071 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:54.071 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:54.071 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:54.071 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:54.071 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:54.071 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:54.071 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:54.071 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:54.330 btrfs-progs v6.6.2 00:08:54.330 See https://btrfs.readthedocs.io for more information. 00:08:54.330 00:08:54.330 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:54.330 NOTE: several default settings have changed in version 5.15, please make sure 00:08:54.330 this does not affect your deployments: 00:08:54.330 - DUP for metadata (-m dup) 00:08:54.330 - enabled no-holes (-O no-holes) 00:08:54.330 - enabled free-space-tree (-R free-space-tree) 00:08:54.330 00:08:54.330 Label: (null) 00:08:54.330 UUID: 6ed069c8-68ba-404a-81db-590052e66021 00:08:54.330 Node size: 16384 00:08:54.330 Sector size: 4096 00:08:54.330 Filesystem size: 510.00MiB 00:08:54.330 Block group profiles: 00:08:54.330 Data: single 8.00MiB 00:08:54.330 Metadata: DUP 32.00MiB 00:08:54.330 System: DUP 8.00MiB 00:08:54.330 SSD detected: yes 00:08:54.330 Zoned device: no 00:08:54.330 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:54.330 Runtime features: free-space-tree 00:08:54.330 Checksum: crc32c 00:08:54.330 Number of devices: 1 00:08:54.330 Devices: 00:08:54.330 ID SIZE PATH 00:08:54.330 1 510.00MiB /dev/nvme0n1p1 00:08:54.330 00:08:54.330 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:54.330 04:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 588809 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:55.264 00:08:55.264 real 0m1.298s 00:08:55.264 user 0m0.016s 00:08:55.264 sys 0m0.114s 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:55.264 ************************************ 00:08:55.264 END TEST filesystem_in_capsule_btrfs 00:08:55.264 ************************************ 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:55.264 ************************************ 00:08:55.264 START TEST filesystem_in_capsule_xfs 00:08:55.264 ************************************ 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:55.264 04:58:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:55.522 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:55.522 = sectsz=512 attr=2, projid32bit=1 00:08:55.522 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:55.522 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:55.522 data = bsize=4096 blocks=130560, imaxpct=25 00:08:55.522 = sunit=0 swidth=0 blks 00:08:55.522 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:55.522 log =internal log bsize=4096 blocks=16384, version=2 00:08:55.522 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:55.522 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:56.459 Discarding blocks...Done. 
00:08:56.459 04:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:56.459 04:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:57.834 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:58.094 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:58.094 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:58.094 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:58.094 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:58.094 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:58.094 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 588809 00:08:58.094 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:58.094 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:58.094 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:58.094 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:58.094 00:08:58.094 real 0m2.689s 00:08:58.094 user 0m0.028s 00:08:58.094 sys 0m0.049s 00:08:58.094 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.094 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:58.094 ************************************ 00:08:58.094 END TEST filesystem_in_capsule_xfs 00:08:58.094 ************************************ 00:08:58.094 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:58.094 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:58.354 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:58.354 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:58.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.614 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:58.614 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:58.614 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:58.614 04:58:04 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:58.614 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:58.614 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:58.614 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:58.614 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:58.614 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.614 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:58.614 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.614 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:58.614 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 588809 00:08:58.614 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 588809 ']' 00:08:58.614 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 588809 00:08:58.614 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:58.614 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:58.614 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 588809 00:08:58.614 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:58.614 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:58.614 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 588809' 00:08:58.614 killing process with pid 588809 00:08:58.614 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 588809 00:08:58.614 04:58:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 588809 00:09:01.151 04:58:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:01.151 00:09:01.151 real 0m14.144s 00:09:01.151 user 0m52.009s 00:09:01.151 sys 0m2.010s 00:09:01.151 04:58:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.151 04:58:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:01.151 ************************************ 00:09:01.151 END TEST nvmf_filesystem_in_capsule 00:09:01.151 ************************************ 00:09:01.151 04:58:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:09:01.151 04:58:07 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:01.151 04:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- 
# nvmfcleanup 00:09:01.151 04:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:09:01.151 04:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:01.151 04:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:09:01.151 04:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:01.151 04:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:01.151 rmmod nvme_tcp 00:09:01.151 rmmod nvme_fabrics 00:09:01.151 rmmod nvme_keyring 00:09:01.151 04:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:01.151 04:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:09:01.151 04:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:09:01.151 04:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:01.151 04:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:01.151 04:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:01.151 04:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:01.151 04:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:01.151 04:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:01.151 04:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.151 04:58:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.151 04:58:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.694 04:58:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:03.694 00:09:03.694 real 0m33.782s 00:09:03.694 user 1m48.918s 00:09:03.694 sys 0m5.590s 00:09:03.694 04:58:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:03.694 04:58:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:03.694 ************************************ 00:09:03.694 END TEST nvmf_filesystem 00:09:03.694 ************************************ 00:09:03.694 04:58:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:03.694 04:58:09 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:03.694 04:58:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:03.694 04:58:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.695 04:58:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:03.695 ************************************ 00:09:03.695 START TEST nvmf_target_discovery 00:09:03.695 ************************************ 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:03.695 * Looking for test storage... 
00:09:03.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:09:03.695 04:58:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:05.623 04:58:11 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:05.623 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:05.623 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:05.623 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:05.623 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:05.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:05.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:09:05.623 00:09:05.623 --- 10.0.0.2 ping statistics --- 00:09:05.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.623 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:09:05.623 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:05.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:05.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:09:05.623 00:09:05.623 --- 10.0.0.1 ping statistics --- 00:09:05.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.624 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:09:05.624 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.624 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:09:05.624 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:05.624 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.624 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:05.624 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:05.624 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.624 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:05.624 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:05.624 04:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:05.624 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:05.624 04:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:05.624 04:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:05.624 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=592561 00:09:05.624 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:05.624 04:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 592561 00:09:05.624 04:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 592561 ']' 00:09:05.624 04:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.624 04:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:05.624 04:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:09:05.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.624 04:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:05.624 04:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:05.624 [2024-07-13 04:58:11.987033] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:05.624 [2024-07-13 04:58:11.987167] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.624 EAL: No free 2048 kB hugepages reported on node 1 00:09:05.884 [2024-07-13 04:58:12.126951] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:06.143 [2024-07-13 04:58:12.395255] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.143 [2024-07-13 04:58:12.395341] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:06.143 [2024-07-13 04:58:12.395370] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.143 [2024-07-13 04:58:12.395391] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.143 [2024-07-13 04:58:12.395413] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:06.143 [2024-07-13 04:58:12.395541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.143 [2024-07-13 04:58:12.395603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.143 [2024-07-13 04:58:12.395647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.143 [2024-07-13 04:58:12.395659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.709 04:58:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:06.709 04:58:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:09:06.709 04:58:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:06.709 04:58:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:06.709 04:58:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.709 04:58:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.709 04:58:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:06.709 04:58:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.709 04:58:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.709 [2024-07-13 04:58:12.962091] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.709 04:58:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.709 04:58:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:06.709 04:58:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:06.709 04:58:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
00:09:06.709 04:58:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.709 04:58:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.709 Null1 00:09:06.709 04:58:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.709 04:58:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:06.709 04:58:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.709 04:58:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.709 04:58:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.709 04:58:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:06.709 04:58:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.709 04:58:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.709 04:58:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.709 04:58:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:06.709 04:58:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.709 04:58:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.709 [2024-07-13 04:58:13.003583] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.709 Null2 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:06.709 04:58:13 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.709 Null3 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.709 Null4 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.709 04:58:13 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.709 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:09:06.969 00:09:06.969 Discovery Log Number of Records 6, Generation counter 6 00:09:06.969 =====Discovery Log Entry 0====== 00:09:06.969 trtype: tcp 00:09:06.969 adrfam: ipv4 00:09:06.969 subtype: current discovery subsystem 00:09:06.969 treq: not required 00:09:06.969 portid: 0 00:09:06.969 trsvcid: 4420 00:09:06.969 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:06.969 traddr: 10.0.0.2 00:09:06.969 eflags: explicit discovery connections, duplicate discovery information 00:09:06.969 sectype: none 00:09:06.969 =====Discovery Log Entry 1====== 00:09:06.969 trtype: tcp 00:09:06.969 adrfam: ipv4 00:09:06.969 subtype: nvme subsystem 00:09:06.969 treq: not required 00:09:06.969 portid: 0 00:09:06.969 trsvcid: 4420 00:09:06.969 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:06.969 traddr: 10.0.0.2 00:09:06.969 eflags: none 00:09:06.969 sectype: none 00:09:06.969 =====Discovery Log Entry 2====== 00:09:06.969 trtype: tcp 00:09:06.969 adrfam: ipv4 00:09:06.969 subtype: nvme subsystem 00:09:06.969 treq: not required 00:09:06.969 portid: 0 00:09:06.969 trsvcid: 4420 00:09:06.969 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:06.969 traddr: 10.0.0.2 00:09:06.969 eflags: none 00:09:06.969 sectype: none 00:09:06.969 =====Discovery Log Entry 3====== 00:09:06.969 trtype: tcp 00:09:06.969 adrfam: ipv4 00:09:06.969 subtype: nvme subsystem 00:09:06.969 treq: not required 00:09:06.969 portid: 0 00:09:06.969 trsvcid: 4420 00:09:06.969 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:06.969 traddr: 10.0.0.2 00:09:06.969 eflags: none 00:09:06.969 sectype: none 00:09:06.969 =====Discovery Log Entry 4====== 00:09:06.969 trtype: tcp 00:09:06.969 adrfam: ipv4 00:09:06.969 subtype: nvme subsystem 00:09:06.969 treq: not required 
00:09:06.969 portid: 0 00:09:06.969 trsvcid: 4420 00:09:06.969 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:06.969 traddr: 10.0.0.2 00:09:06.969 eflags: none 00:09:06.969 sectype: none 00:09:06.969 =====Discovery Log Entry 5====== 00:09:06.969 trtype: tcp 00:09:06.969 adrfam: ipv4 00:09:06.969 subtype: discovery subsystem referral 00:09:06.969 treq: not required 00:09:06.969 portid: 0 00:09:06.969 trsvcid: 4430 00:09:06.969 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:06.969 traddr: 10.0.0.2 00:09:06.969 eflags: none 00:09:06.969 sectype: none 00:09:06.969 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:06.969 Perform nvmf subsystem discovery via RPC 00:09:06.969 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:06.969 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.969 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.969 [ 00:09:06.969 { 00:09:06.969 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:06.969 "subtype": "Discovery", 00:09:06.969 "listen_addresses": [ 00:09:06.969 { 00:09:06.969 "trtype": "TCP", 00:09:06.969 "adrfam": "IPv4", 00:09:06.969 "traddr": "10.0.0.2", 00:09:06.969 "trsvcid": "4420" 00:09:06.969 } 00:09:06.969 ], 00:09:06.969 "allow_any_host": true, 00:09:06.969 "hosts": [] 00:09:06.969 }, 00:09:06.969 { 00:09:06.969 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:06.969 "subtype": "NVMe", 00:09:06.969 "listen_addresses": [ 00:09:06.969 { 00:09:06.969 "trtype": "TCP", 00:09:06.969 "adrfam": "IPv4", 00:09:06.969 "traddr": "10.0.0.2", 00:09:06.969 "trsvcid": "4420" 00:09:06.969 } 00:09:06.969 ], 00:09:06.969 "allow_any_host": true, 00:09:06.969 "hosts": [], 00:09:06.969 "serial_number": "SPDK00000000000001", 00:09:06.969 "model_number": "SPDK bdev Controller", 00:09:06.969 "max_namespaces": 32, 00:09:06.969 "min_cntlid": 1, 00:09:06.969 "max_cntlid": 65519, 00:09:06.969 "namespaces": [ 00:09:06.969 { 00:09:06.969 "nsid": 1, 00:09:06.969 "bdev_name": "Null1", 00:09:06.969 "name": "Null1", 00:09:06.969 "nguid": "939866FF46694AE1ACE11F3317CEC05E", 00:09:06.969 "uuid": "939866ff-4669-4ae1-ace1-1f3317cec05e" 00:09:06.969 } 00:09:06.969 ] 00:09:06.969 }, 00:09:06.969 { 00:09:06.969 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:06.969 "subtype": "NVMe", 00:09:06.969 "listen_addresses": [ 00:09:06.969 { 00:09:06.969 "trtype": "TCP", 00:09:06.969 "adrfam": "IPv4", 00:09:06.969 "traddr": "10.0.0.2", 00:09:06.969 "trsvcid": "4420" 00:09:06.969 } 00:09:06.969 ], 00:09:06.969 "allow_any_host": true, 00:09:06.969 "hosts": [], 00:09:06.969 "serial_number": "SPDK00000000000002", 00:09:06.969 "model_number": "SPDK bdev Controller", 00:09:06.969 "max_namespaces": 32, 00:09:06.969 "min_cntlid": 1, 00:09:06.969 "max_cntlid": 65519, 00:09:06.969 "namespaces": [ 00:09:06.969 { 00:09:06.969 "nsid": 1, 00:09:06.969 "bdev_name": "Null2", 00:09:06.969 "name": "Null2", 00:09:06.969 "nguid": "888E19030175442DA211D2AE431AC393", 00:09:06.969 "uuid": "888e1903-0175-442d-a211-d2ae431ac393" 00:09:06.969 } 00:09:06.969 ] 00:09:06.969 }, 00:09:06.969 { 00:09:06.969 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:06.969 "subtype": "NVMe", 00:09:06.969 "listen_addresses": [ 00:09:06.969 { 00:09:06.969 "trtype": "TCP", 00:09:06.969 "adrfam": "IPv4", 00:09:06.969 "traddr": "10.0.0.2", 00:09:06.969 "trsvcid": "4420" 00:09:06.969 } 00:09:06.969 ], 00:09:06.969 "allow_any_host": true, 
00:09:06.969 "hosts": [], 00:09:06.969 "serial_number": "SPDK00000000000003", 00:09:06.969 "model_number": "SPDK bdev Controller", 00:09:06.969 "max_namespaces": 32, 00:09:06.969 "min_cntlid": 1, 00:09:06.969 "max_cntlid": 65519, 00:09:06.969 "namespaces": [ 00:09:06.969 { 00:09:06.969 "nsid": 1, 00:09:06.969 "bdev_name": "Null3", 00:09:06.969 "name": "Null3", 00:09:06.969 "nguid": "8DA5D61875C746DFA30E6D253AD706AD", 00:09:06.969 "uuid": "8da5d618-75c7-46df-a30e-6d253ad706ad" 00:09:06.969 } 00:09:06.969 ] 00:09:06.969 }, 00:09:06.969 { 00:09:06.969 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:06.969 "subtype": "NVMe", 00:09:06.969 "listen_addresses": [ 00:09:06.969 { 00:09:06.969 "trtype": "TCP", 00:09:06.969 "adrfam": "IPv4", 00:09:06.969 "traddr": "10.0.0.2", 00:09:06.969 "trsvcid": "4420" 00:09:06.969 } 00:09:06.969 ], 00:09:06.969 "allow_any_host": true, 00:09:06.969 "hosts": [], 00:09:06.969 "serial_number": "SPDK00000000000004", 00:09:06.969 "model_number": "SPDK bdev Controller", 00:09:06.969 "max_namespaces": 32, 00:09:06.969 "min_cntlid": 1, 00:09:06.969 "max_cntlid": 65519, 00:09:06.969 "namespaces": [ 00:09:06.969 { 00:09:06.969 "nsid": 1, 00:09:06.969 "bdev_name": "Null4", 00:09:06.969 "name": "Null4", 00:09:06.969 "nguid": "1AB4719BC6C34720B448C7E9C4DC099B", 00:09:06.969 "uuid": "1ab4719b-c6c3-4720-b448-c7e9c4dc099b" 00:09:06.969 } 00:09:06.969 ] 00:09:06.969 } 00:09:06.969 ] 00:09:06.969 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.969 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:06.969 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:06.969 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:06.969 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:06.970 rmmod nvme_tcp 00:09:06.970 rmmod nvme_fabrics 00:09:06.970 rmmod nvme_keyring 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 592561 ']' 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 592561 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 592561 ']' 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 592561 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 592561 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 592561' 00:09:06.970 killing process with pid 592561 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 592561 00:09:06.970 04:58:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 592561 00:09:08.351 04:58:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:08.351 04:58:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:08.351 04:58:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:08.351 04:58:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:08.351 04:58:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:08.351 04:58:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.351 04:58:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:08.351 04:58:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.888 04:58:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:10.888 00:09:10.888 real 0m7.104s 00:09:10.888 user 0m8.842s 00:09:10.888 sys 0m1.992s 00:09:10.888 04:58:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:10.888 04:58:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:10.888 ************************************ 00:09:10.888 END TEST nvmf_target_discovery 00:09:10.888 ************************************ 00:09:10.888 04:58:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # 
return 0 00:09:10.888 04:58:16 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:10.888 04:58:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:10.888 04:58:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.888 04:58:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:10.888 ************************************ 00:09:10.888 START TEST nvmf_referrals 00:09:10.888 ************************************ 00:09:10.888 04:58:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:10.888 * Looking for test storage... 00:09:10.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:10.888 04:58:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:10.888 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:10.888 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.888 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.888 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.888 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.888 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.888 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.888 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.888 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.888 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.888 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.888 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:10.888 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:10.888 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.888 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.888 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:10.888 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.888 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:10.888 04:58:16 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.888 04:58:16 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.888 04:58:16 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
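The three loopback referral addresses just defined, together with the 4430 referral port set on the next line, drive the first assertion of this test: register each address, then confirm the target reports exactly three referrals. A minimal sketch of that step, again assuming a direct scripts/rpc.py invocation in place of the suite's rpc_cmd wrapper:

  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  # referrals.sh@48 passes only if all three registrations are visible
  [ "$(scripts/rpc.py nvmf_discovery_get_referrals | jq length)" -eq 3 ]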
00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:09:10.889 04:58:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:12.796 04:58:18 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:12.796 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:12.796 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:12.796 04:58:18 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.796 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:12.796 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:12.797 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:12.797 04:58:18 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:12.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:12.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:09:12.797 00:09:12.797 --- 10.0.0.2 ping statistics --- 00:09:12.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.797 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:12.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:12.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:09:12.797 00:09:12.797 --- 10.0.0.1 ping statistics --- 00:09:12.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.797 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=594795 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 594795 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 594795 ']' 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
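The two pings above validate the namespace-based loopback topology that nvmftestinit builds for phy runs. Stripped of its helper functions, the setup is roughly the following (cvl_0_0 and cvl_0_1 are the E810 ports detected earlier; the iptables and loopback-up steps from the trace are omitted, and the nvmf_tgt path is given relative to the spdk checkout):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  # then launch the target inside the namespace, as traced above
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &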
00:09:12.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:12.797 04:58:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:12.797 [2024-07-13 04:58:19.042161] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:12.797 [2024-07-13 04:58:19.042297] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.797 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.797 [2024-07-13 04:58:19.186878] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:13.057 [2024-07-13 04:58:19.454169] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:13.057 [2024-07-13 04:58:19.454256] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:13.057 [2024-07-13 04:58:19.454284] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:13.057 [2024-07-13 04:58:19.454304] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:13.057 [2024-07-13 04:58:19.454325] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:13.057 [2024-07-13 04:58:19.454449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.057 [2024-07-13 04:58:19.454509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:13.057 [2024-07-13 04:58:19.454538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.057 [2024-07-13 04:58:19.454552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:13.624 04:58:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:13.624 04:58:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:09:13.624 04:58:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:13.624 04:58:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:13.624 04:58:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:13.624 04:58:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.624 04:58:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:13.624 04:58:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.624 04:58:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:13.624 [2024-07-13 04:58:19.987061] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.624 04:58:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.624 04:58:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:13.624 04:58:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.624 04:58:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:13.624 [2024-07-13 04:58:20.000476] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:09:13.624 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.624 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:13.624 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.624 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:13.624 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.624 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:13.624 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.624 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:13.624 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.624 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:13.624 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.624 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:13.625 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.625 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:13.625 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:13.625 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.625 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:13.625 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.625 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:13.625 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:13.625 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:13.625 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:13.625 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.625 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:13.625 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:13.625 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:13.625 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.625 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:13.625 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:13.625 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:13.625 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:13.625 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:13.625 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:13.625 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:13.625 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:13.883 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:13.883 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:13.883 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:13.883 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.883 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:13.883 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.883 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:13.883 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.883 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:13.883 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.883 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:13.883 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.883 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:13.883 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.883 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:13.883 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:13.883 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.883 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:13.883 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:14.143 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:14.402 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:14.402 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:14.402 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:14.402 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:14.402 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:14.402 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:14.402 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:14.402 04:58:20 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:14.402 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:14.402 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:14.402 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:14.402 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:14.402 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:14.659 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:14.659 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:14.659 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.659 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:14.659 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.659 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:14.659 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:14.659 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:14.659 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:14.659 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.659 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:14.659 04:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:14.659 04:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.659 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:14.659 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:14.659 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:14.659 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:14.659 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:14.659 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:14.659 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:14.659 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:14.917 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:14.917 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:14.917 04:58:21 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:14.917 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:14.917 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:14.917 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:14.917 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:14.917 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:14.917 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:14.917 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:14.917 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:14.917 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:14.917 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:15.177 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:15.177 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:15.177 04:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.177 04:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:15.177 04:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.177 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:15.177 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:15.177 04:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.177 04:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:15.177 04:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.177 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:15.177 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:15.177 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:15.177 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:15.177 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:15.177 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:15.177 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:15.438 
04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:15.438 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:15.438 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:15.438 04:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:15.438 04:58:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:15.438 04:58:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:09:15.438 04:58:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:15.438 04:58:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:15.438 04:58:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:15.438 04:58:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:15.438 rmmod nvme_tcp 00:09:15.438 rmmod nvme_fabrics 00:09:15.438 rmmod nvme_keyring 00:09:15.438 04:58:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:15.438 04:58:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:15.438 04:58:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:15.438 04:58:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 594795 ']' 00:09:15.438 04:58:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 594795 00:09:15.438 04:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 594795 ']' 00:09:15.438 04:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 594795 00:09:15.438 04:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:09:15.438 04:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:15.438 04:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 594795 00:09:15.438 04:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:15.438 04:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:15.438 04:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 594795' 00:09:15.438 killing process with pid 594795 00:09:15.438 04:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 594795 00:09:15.438 04:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 594795 00:09:16.816 04:58:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:16.816 04:58:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:16.816 04:58:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:16.816 04:58:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:16.816 04:58:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:16.816 04:58:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.816 04:58:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:16.816 04:58:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.721 04:58:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:18.721 00:09:18.721 real 0m8.292s 00:09:18.721 user 0m14.391s 00:09:18.721 sys 0m2.330s 00:09:18.721 04:58:25 nvmf_tcp.nvmf_referrals 
-- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:18.721 04:58:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:18.721 ************************************ 00:09:18.721 END TEST nvmf_referrals 00:09:18.721 ************************************ 00:09:18.721 04:58:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:18.721 04:58:25 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:18.721 04:58:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:18.721 04:58:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:18.721 04:58:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:18.721 ************************************ 00:09:18.721 START TEST nvmf_connect_disconnect 00:09:18.721 ************************************ 00:09:18.721 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:18.984 * Looking for test storage... 00:09:18.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:18.984 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:18.984 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:18.984 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:18.984 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.984 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.984 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.984 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:18.985 04:58:25 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:09:18.985 04:58:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:20.893 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:20.893 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:20.893 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:20.894 04:58:27 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:20.894 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:20.894 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:20.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:20.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:09:20.894 00:09:20.894 --- 10.0.0.2 ping statistics --- 00:09:20.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.894 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:20.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
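The nvmf_tcp_init entries above carve one E810 port into a network namespace to play the target while its sibling port stays in the root namespace as the initiator, then verify reachability both ways. Condensed into a plain script, with the interface, namespace, and address values exactly as logged:

    NS=cvl_0_0_ns_spdk
    TGT_IF=cvl_0_0                          # becomes the target, inside $NS
    INI_IF=cvl_0_1                          # stays in the root ns as initiator
    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                      # root ns -> namespaced target
    ip netns exec "$NS" ping -c 1 10.0.0.1  # and back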
00:09:20.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:09:20.894 00:09:20.894 --- 10.0.0.1 ping statistics --- 00:09:20.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.894 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:20.894 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:21.153 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:21.153 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:21.153 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:21.153 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:21.154 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=597339 00:09:21.154 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:21.154 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 597339 00:09:21.154 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 597339 ']' 00:09:21.154 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.154 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:21.154 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.154 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:21.154 04:58:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:21.154 [2024-07-13 04:58:27.504374] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
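nvmfappstart then runs the target application inside that namespace and waits for its RPC socket before issuing any commands. A sketch of the launch-and-wait step, with the flags from the log (-i 0 shared-memory id, -e 0xFFFF tracepoint group mask, -m 0xF four-core mask); the polling loop approximates what waitforlisten does rather than quoting it, and paths are shortened to repo-relative:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for _ in {1..100}; do                   # poll until the RPC socket answers
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died" >&2; exit 1; }
        sleep 0.1
    done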
00:09:21.154 [2024-07-13 04:58:27.504512] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.154 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.154 [2024-07-13 04:58:27.640790] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:21.412 [2024-07-13 04:58:27.885766] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:21.412 [2024-07-13 04:58:27.885852] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:21.412 [2024-07-13 04:58:27.885891] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:21.412 [2024-07-13 04:58:27.885913] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:21.412 [2024-07-13 04:58:27.885937] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:21.412 [2024-07-13 04:58:27.886056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.412 [2024-07-13 04:58:27.886116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.412 [2024-07-13 04:58:27.886163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.412 [2024-07-13 04:58:27.886175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:21.979 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:21.979 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:09:21.979 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:21.979 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:21.979 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:21.979 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.979 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:21.979 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.979 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:21.979 [2024-07-13 04:58:28.476227] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:22.237 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.237 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:22.237 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.237 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:22.237 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.237 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:22.237 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:22.237 04:58:28 
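With the app listening, the rpc_cmd calls around this point provision it: a TCP transport, a RAM-backed bdev, and a subsystem that then (in the entries that follow) receives the bdev as a namespace and a listener on 10.0.0.2:4420. The same sequence spelled out with plain rpc.py and the exact arguments from the log; rpc_cmd is the harness's wrapper over this socket:

    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0
    $RPC bdev_malloc_create 64 512          # 64 MiB, 512-byte blocks -> Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420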
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.237 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:22.237 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.237 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:22.237 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.237 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:22.237 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.237 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:22.237 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.237 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:22.237 [2024-07-13 04:58:28.590614] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.237 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.237 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:22.237 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:22.237 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:22.237 04:58:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:24.772 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.677 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.701 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.050 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.506 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:12:13.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.319 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.199 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.095 05:02:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:16.095 05:02:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:16.095 05:02:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:16.095 05:02:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:13:16.095 05:02:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:16.095 05:02:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:13:16.095 05:02:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:16.095 05:02:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:16.095 rmmod nvme_tcp 00:13:16.095 rmmod nvme_fabrics 00:13:16.095 rmmod nvme_keyring 00:13:16.095 05:02:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:16.095 05:02:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:13:16.095 05:02:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:13:16.095 05:02:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 597339 ']' 00:13:16.095 05:02:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 597339 00:13:16.095 05:02:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 597339 
']' 00:13:16.095 05:02:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 597339 00:13:16.095 05:02:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:13:16.095 05:02:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:16.095 05:02:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 597339 00:13:16.095 05:02:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:16.095 05:02:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:16.095 05:02:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 597339' 00:13:16.095 killing process with pid 597339 00:13:16.095 05:02:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 597339 00:13:16.095 05:02:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 597339 00:13:17.997 05:02:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:17.997 05:02:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:17.997 05:02:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:17.997 05:02:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:17.997 05:02:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:17.997 05:02:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.997 05:02:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.997 05:02:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.905 05:02:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:19.905 00:13:19.905 real 4m0.861s 00:13:19.905 user 15m10.414s 00:13:19.905 sys 0m37.678s 00:13:19.905 05:02:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:19.905 05:02:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:19.905 ************************************ 00:13:19.905 END TEST nvmf_connect_disconnect 00:13:19.905 ************************************ 00:13:19.905 05:02:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:19.905 05:02:26 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:19.905 05:02:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:19.905 05:02:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:19.905 05:02:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:19.905 ************************************ 00:13:19.905 START TEST nvmf_multitarget 00:13:19.905 ************************************ 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:19.905 * Looking for test storage... 
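The long run of "disconnected 1 controller(s)" lines above is the whole point of the suite that just ended: num_iterations=100 with NVME_CONNECT='nvme connect -i 8', i.e. one hundred fabric connect/disconnect cycles against the subsystem created earlier. A minimal sketch of that loop; the readiness check between connect and disconnect is illustrative, and the suite's own check may differ:

    NQN=nqn.2016-06.io.spdk:cnode1
    for _ in $(seq 1 100); do
        nvme connect -i 8 -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
        until nvme list-subsys 2>/dev/null | grep -q "$NQN"; do
            sleep 0.1                       # wait for the controller to appear
        done
        nvme disconnect -n "$NQN"           # prints the "disconnected" lines above
    done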
00:13:19.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:13:19.905 05:02:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:21.817 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:21.817 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.817 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:21.818 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:21.818 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:21.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
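The device-discovery pass repeated above (gather_supported_nvmf_pci_devs) reduces to a sysfs walk: match the E810 PCI IDs (0x8086:0x159b), then list the netdevs registered under each matching PCI function. A simplified sketch using the two addresses from the log; the real common.sh also matches x722 and Mellanox IDs:

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            [[ -e $dev ]] || continue       # no netdev bound to this function
            echo "Found net device under $pci: ${dev##*/} ($(cat "$dev/operstate"))"
        done
    done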
00:13:21.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:13:21.818 00:13:21.818 --- 10.0.0.2 ping statistics --- 00:13:21.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.818 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:21.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:21.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:13:21.818 00:13:21.818 --- 10.0.0.1 ping statistics --- 00:13:21.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.818 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=628877 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 628877 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 628877 ']' 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.818 05:02:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.077 05:02:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:22.077 [2024-07-13 05:02:28.406884] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
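This suite exercises SPDK's ability to host several independent NVMe-oF targets in one process. The entries that follow show the whole flow; condensed here using the suite's own multitarget_rpc.py wrapper (path shortened to repo-relative, jq assumed installed):

    RPC=./test/nvmf/target/multitarget_rpc.py
    [[ $($RPC nvmf_get_targets | jq length) == 1 ]]   # only the default target
    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    [[ $($RPC nvmf_get_targets | jq length) == 3 ]]   # default + two new ones
    $RPC nvmf_delete_target -n nvmf_tgt_1
    $RPC nvmf_delete_target -n nvmf_tgt_2
    [[ $($RPC nvmf_get_targets | jq length) == 1 ]]   # back to the default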
00:13:22.077 [2024-07-13 05:02:28.407054] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.077 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.077 [2024-07-13 05:02:28.545432] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:22.336 [2024-07-13 05:02:28.777924] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:22.336 [2024-07-13 05:02:28.777991] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:22.336 [2024-07-13 05:02:28.778030] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:22.336 [2024-07-13 05:02:28.778048] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:22.336 [2024-07-13 05:02:28.778067] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:22.336 [2024-07-13 05:02:28.778190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.336 [2024-07-13 05:02:28.778250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.336 [2024-07-13 05:02:28.778292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.336 [2024-07-13 05:02:28.778303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:22.900 05:02:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:22.900 05:02:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:13:22.900 05:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:22.900 05:02:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:22.900 05:02:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:22.900 05:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.900 05:02:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:23.159 05:02:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:23.159 05:02:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:23.159 05:02:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:23.159 05:02:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:23.159 "nvmf_tgt_1" 00:13:23.159 05:02:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:23.415 "nvmf_tgt_2" 00:13:23.415 05:02:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:23.415 05:02:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:23.416 05:02:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:13:23.416 05:02:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:23.673 true 00:13:23.673 05:02:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:23.673 true 00:13:23.673 05:02:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:23.673 05:02:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:23.930 05:02:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:23.930 05:02:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:23.930 05:02:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:23.930 05:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:23.930 05:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:13:23.930 05:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:23.930 05:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:13:23.930 05:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:23.930 05:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:23.930 rmmod nvme_tcp 00:13:23.930 rmmod nvme_fabrics 00:13:23.930 rmmod nvme_keyring 00:13:23.930 05:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:23.930 05:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:13:23.930 05:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:13:23.930 05:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 628877 ']' 00:13:23.930 05:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 628877 00:13:23.930 05:02:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 628877 ']' 00:13:23.930 05:02:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 628877 00:13:23.930 05:02:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:13:23.931 05:02:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:23.931 05:02:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 628877 00:13:23.931 05:02:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:23.931 05:02:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:23.931 05:02:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 628877' 00:13:23.931 killing process with pid 628877 00:13:23.931 05:02:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 628877 00:13:23.931 05:02:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 628877 00:13:25.314 05:02:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:25.314 05:02:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:25.314 05:02:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:25.314 05:02:31 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:25.314 05:02:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:25.314 05:02:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.314 05:02:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.314 05:02:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.219 05:02:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:27.219 00:13:27.219 real 0m7.467s 00:13:27.219 user 0m11.562s 00:13:27.219 sys 0m2.100s 00:13:27.219 05:02:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:27.219 05:02:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:27.219 ************************************ 00:13:27.220 END TEST nvmf_multitarget 00:13:27.220 ************************************ 00:13:27.220 05:02:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:27.220 05:02:33 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:27.220 05:02:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:27.220 05:02:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:27.220 05:02:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:27.220 ************************************ 00:13:27.220 START TEST nvmf_rpc 00:13:27.220 ************************************ 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:27.220 * Looking for test storage... 
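Note: the teardown just above (modprobe -r of nvme-tcp/nvme-fabrics, killprocess on the nvmf_tgt pid, remove_spdk_ns, address flush) is nvmftestfini resetting the machine so the next test, nvmf_rpc, starts clean on the same NICs. Roughly, with ip netns delete standing in for the harness's _remove_spdk_ns helper:

  kill "$nvmfpid" && wait "$nvmfpid"    # stop the target app
  modprobe -r nvme-tcp nvme-fabrics     # unload initiator-side modules
  ip netns delete cvl_0_0_ns_spdk       # returns cvl_0_0 to the host
  ip -4 addr flush cvl_0_1              # drop 10.0.0.1/24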
00:13:27.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:13:27.220 05:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
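Note: one detail worth pulling out of the common.sh setup above: the host identity used by every nvme connect in this test comes from nvme gen-hostnqn, and NVME_HOSTID is just the UUID tail of that NQN. One way to reproduce the same values (common.sh may slice the string differently):

  NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep everything after the last colon
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")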
00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:29.750 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:29.750 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:29.750 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:29.750 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:29.750 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:29.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:29.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:13:29.750 00:13:29.750 --- 10.0.0.2 ping statistics --- 00:13:29.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.750 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:13:29.751 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:29.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
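Note: the device discovery traced above never parses lspci; gather_supported_nvmf_pci_devs matches known device IDs (0x8086:0x159b is this E810) and then resolves each PCI address to its netdev through sysfs, which is where the "Found net devices under 0000:0a:00.x" lines come from. The lookup reduces to:

  for pci in 0000:0a:00.0 0000:0a:00.1; do
      for dev in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$dev" ] && echo "$pci -> ${dev##*/}"
      done
  done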
00:13:29.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:13:29.751 00:13:29.751 --- 10.0.0.1 ping statistics --- 00:13:29.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.751 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:13:29.751 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.751 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:13:29.751 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:29.751 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.751 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:29.751 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:29.751 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.751 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:29.751 05:02:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:29.751 05:02:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:29.751 05:02:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:29.751 05:02:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:29.751 05:02:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.751 05:02:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=631241 00:13:29.751 05:02:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:29.751 05:02:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 631241 00:13:29.751 05:02:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 631241 ']' 00:13:29.751 05:02:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.751 05:02:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:29.751 05:02:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.751 05:02:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:29.751 05:02:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.751 [2024-07-13 05:02:36.092661] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:29.751 [2024-07-13 05:02:36.092789] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.751 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.010 [2024-07-13 05:02:36.259267] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:30.269 [2024-07-13 05:02:36.540252] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.269 [2024-07-13 05:02:36.540350] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
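Note: because nvmf_tgt runs with -e 0xFFFF, all tracepoint groups are enabled and the app prints how to retrieve the trace. Following the notice above (paths assume this workspace's default build layout):

  # Live snapshot from the running target (app name nvmf, shm id 0):
  ./build/bin/spdk_trace -s nvmf -i 0
  # Or, as the notice suggests, keep the shared-memory trace file for offline analysis:
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0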
00:13:30.269 [2024-07-13 05:02:36.540380] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.269 [2024-07-13 05:02:36.540402] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.269 [2024-07-13 05:02:36.540424] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:30.269 [2024-07-13 05:02:36.540551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.269 [2024-07-13 05:02:36.540610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.269 [2024-07-13 05:02:36.540655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.269 [2024-07-13 05:02:36.540665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:30.834 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:30.834 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:13:30.834 05:02:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:30.834 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:30.834 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.834 05:02:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.834 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:30.834 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.834 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.834 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.834 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:30.834 "tick_rate": 2700000000, 00:13:30.834 "poll_groups": [ 00:13:30.834 { 00:13:30.834 "name": "nvmf_tgt_poll_group_000", 00:13:30.834 "admin_qpairs": 0, 00:13:30.834 "io_qpairs": 0, 00:13:30.834 "current_admin_qpairs": 0, 00:13:30.834 "current_io_qpairs": 0, 00:13:30.834 "pending_bdev_io": 0, 00:13:30.834 "completed_nvme_io": 0, 00:13:30.834 "transports": [] 00:13:30.834 }, 00:13:30.834 { 00:13:30.834 "name": "nvmf_tgt_poll_group_001", 00:13:30.834 "admin_qpairs": 0, 00:13:30.834 "io_qpairs": 0, 00:13:30.834 "current_admin_qpairs": 0, 00:13:30.834 "current_io_qpairs": 0, 00:13:30.834 "pending_bdev_io": 0, 00:13:30.834 "completed_nvme_io": 0, 00:13:30.834 "transports": [] 00:13:30.834 }, 00:13:30.834 { 00:13:30.834 "name": "nvmf_tgt_poll_group_002", 00:13:30.834 "admin_qpairs": 0, 00:13:30.834 "io_qpairs": 0, 00:13:30.834 "current_admin_qpairs": 0, 00:13:30.834 "current_io_qpairs": 0, 00:13:30.834 "pending_bdev_io": 0, 00:13:30.834 "completed_nvme_io": 0, 00:13:30.834 "transports": [] 00:13:30.835 }, 00:13:30.835 { 00:13:30.835 "name": "nvmf_tgt_poll_group_003", 00:13:30.835 "admin_qpairs": 0, 00:13:30.835 "io_qpairs": 0, 00:13:30.835 "current_admin_qpairs": 0, 00:13:30.835 "current_io_qpairs": 0, 00:13:30.835 "pending_bdev_io": 0, 00:13:30.835 "completed_nvme_io": 0, 00:13:30.835 "transports": [] 00:13:30.835 } 00:13:30.835 ] 00:13:30.835 }' 00:13:30.835 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:30.835 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:30.835 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:30.835 05:02:37 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:13:30.835 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:30.835 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:30.835 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:30.835 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:30.835 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.835 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.835 [2024-07-13 05:02:37.274670] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.835 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.835 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:30.835 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.835 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.835 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.835 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:30.835 "tick_rate": 2700000000, 00:13:30.835 "poll_groups": [ 00:13:30.835 { 00:13:30.835 "name": "nvmf_tgt_poll_group_000", 00:13:30.835 "admin_qpairs": 0, 00:13:30.835 "io_qpairs": 0, 00:13:30.835 "current_admin_qpairs": 0, 00:13:30.835 "current_io_qpairs": 0, 00:13:30.835 "pending_bdev_io": 0, 00:13:30.835 "completed_nvme_io": 0, 00:13:30.835 "transports": [ 00:13:30.835 { 00:13:30.835 "trtype": "TCP" 00:13:30.835 } 00:13:30.835 ] 00:13:30.835 }, 00:13:30.835 { 00:13:30.835 "name": "nvmf_tgt_poll_group_001", 00:13:30.835 "admin_qpairs": 0, 00:13:30.835 "io_qpairs": 0, 00:13:30.835 "current_admin_qpairs": 0, 00:13:30.835 "current_io_qpairs": 0, 00:13:30.835 "pending_bdev_io": 0, 00:13:30.835 "completed_nvme_io": 0, 00:13:30.835 "transports": [ 00:13:30.835 { 00:13:30.835 "trtype": "TCP" 00:13:30.835 } 00:13:30.835 ] 00:13:30.835 }, 00:13:30.835 { 00:13:30.835 "name": "nvmf_tgt_poll_group_002", 00:13:30.835 "admin_qpairs": 0, 00:13:30.835 "io_qpairs": 0, 00:13:30.835 "current_admin_qpairs": 0, 00:13:30.835 "current_io_qpairs": 0, 00:13:30.835 "pending_bdev_io": 0, 00:13:30.835 "completed_nvme_io": 0, 00:13:30.835 "transports": [ 00:13:30.835 { 00:13:30.835 "trtype": "TCP" 00:13:30.835 } 00:13:30.835 ] 00:13:30.835 }, 00:13:30.835 { 00:13:30.835 "name": "nvmf_tgt_poll_group_003", 00:13:30.835 "admin_qpairs": 0, 00:13:30.835 "io_qpairs": 0, 00:13:30.835 "current_admin_qpairs": 0, 00:13:30.835 "current_io_qpairs": 0, 00:13:30.835 "pending_bdev_io": 0, 00:13:30.835 "completed_nvme_io": 0, 00:13:30.835 "transports": [ 00:13:30.835 { 00:13:30.835 "trtype": "TCP" 00:13:30.835 } 00:13:30.835 ] 00:13:30.835 } 00:13:30.835 ] 00:13:30.835 }' 00:13:30.835 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:30.835 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:30.835 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:30.835 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
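Note: the rpc.sh helpers exercised here, jcount and jsum (the io_qpairs sum continues just below), are thin wrappers over nvmf_get_stats: jcount pipes a jq filter into wc -l, jsum totals the matches with awk. Standalone equivalents, assuming the default RPC socket:

  stats=$(./scripts/rpc.py nvmf_get_stats)
  echo "$stats" | jq '.poll_groups[].name' | wc -l                              # jcount: 4 poll groups for -m 0xF
  echo "$stats" | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'   # jsum: total io_qpairs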
00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.092 Malloc1 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.092 [2024-07-13 05:02:37.489660] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:31.092 [2024-07-13 05:02:37.512728] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:13:31.092 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:31.092 could not add new controller: failed to write to nvme-fabrics device 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.092 05:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:32.026 05:02:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:32.026 05:02:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:32.026 05:02:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:32.026 05:02:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:32.026 05:02:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:33.926 05:02:40 
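Note: this stretch is the host-ACL check: build a subsystem on Malloc1, disable allow_any_host, watch the unauthorized connect fail with "does not allow host", then whitelist the initiator NQN and connect for real (the waitforserial polling around this point confirms the namespace shows up). Condensed, using rpc.py from the SPDK tree once the tcp transport exists:

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # enforce the host list
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$NVME_HOSTNQN" || true   # rejected
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$NVME_HOSTNQN"           # accepted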
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:33.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:13:33.926 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:33.926 [2024-07-13 05:02:40.413110] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:13:34.202 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:34.202 could not add new controller: failed to write to nvme-fabrics device 00:13:34.202 05:02:40 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:13:34.202 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:34.202 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:34.202 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:34.202 05:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:34.202 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.202 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.202 05:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.202 05:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:34.788 05:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:34.788 05:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:34.788 05:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:34.788 05:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:34.788 05:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:36.690 05:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:36.690 05:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:36.690 05:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:36.690 05:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:36.690 05:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:36.690 05:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:36.690 05:02:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:36.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:36.946 05:02:43 
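Note: waitforserial and waitforserial_disconnect, wrapped around every connect and disconnect in this test, simply poll lsblk for the subsystem serial (SPDKISFASTANDAWESOME) until the block device appears or goes away, as in this sketch of the appear case:

  waitforserial() {
      local serial=$1 i=0
      while (( i++ <= 15 )); do
          (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
          sleep 2
      done
      return 1
  }
  waitforserial SPDKISFASTANDAWESOME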
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.946 [2024-07-13 05:02:43.306402] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.946 05:02:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:37.879 05:02:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:37.879 05:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:37.879 05:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:37.879 05:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:37.879 05:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:39.780 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:39.780 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:39.780 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:39.780 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:39.780 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:39.780 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:39.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.781 [2024-07-13 05:02:46.189393] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.781 05:02:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:40.716 05:02:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:40.716 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:13:40.716 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:40.716 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:40.716 05:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:42.621 05:02:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:42.621 05:02:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:42.621 05:02:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:42.621 05:02:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:42.621 05:02:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:42.621 05:02:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:42.621 05:02:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:42.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.621 05:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:42.621 05:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:42.621 05:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:42.621 05:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:42.621 05:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:42.621 05:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:42.621 05:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:42.621 05:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:42.621 05:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.621 05:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.621 05:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.621 05:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:42.621 05:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.621 05:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.621 05:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.621 05:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:42.621 05:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:42.621 05:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.621 05:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.621 05:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.621 05:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.621 05:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.621 05:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.621 [2024-07-13 05:02:49.113525] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:13:42.621 05:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.621 05:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:42.621 05:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.621 05:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.879 05:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.879 05:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:42.879 05:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.879 05:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.879 05:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.879 05:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:43.445 05:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:43.445 05:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:43.445 05:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:43.445 05:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:43.445 05:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:45.348 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:45.348 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:45.348 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:45.348 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:45.348 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:45.348 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:45.348 05:02:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:45.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.609 [2024-07-13 05:02:51.965257] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.609 05:02:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:46.177 05:02:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:46.177 05:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:46.177 05:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:46.177 05:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:46.177 05:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:48.710 
05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:48.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.710 [2024-07-13 05:02:54.895471] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.710 05:02:54 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.710 05:02:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:49.277 05:02:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:49.277 05:02:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:49.277 05:02:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:49.277 05:02:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:49.277 05:02:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:51.186 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:51.186 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:51.186 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:51.186 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:51.186 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:51.186 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:51.186 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:51.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.445 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:51.445 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:51.445 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:51.445 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:51.445 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:51.445 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:51.445 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:51.445 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:51.445 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.445 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.445 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.445 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:51.445 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.445 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.445 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.445 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:51.445 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:51.445 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:51.445 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.445 05:02:57 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:51.445 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.445 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.446 [2024-07-13 05:02:57.758620] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.446 [2024-07-13 05:02:57.806650] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.446 [2024-07-13 05:02:57.854790] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
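The loop running here (target/rpc.sh@99-107) cycles one subsystem through its full RPC lifecycle five times without ever attaching a host. Condensed into a standalone sketch (rpc.py path as in the trace; rpc_cmd in the harness wraps it, so invoking rpc.py directly against the running target is an assumption):

  # Sketch: the per-iteration RPC lifecycle traced above, no I/O involved
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for i in $(seq 1 5); do
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
      $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done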
00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.446 [2024-07-13 05:02:57.902983] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.446 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
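The earlier loop (target/rpc.sh@81-94, traced further up) added a host to the same lifecycle: connect with nvme-cli, poll lsblk until the subsystem serial appears, then disconnect and poll until it is gone. A condensed sketch of that host-side pattern; the waitforserial helpers live in common/autotest_common.sh and their bodies are reconstructed here from the traced commands, so treat the control flow as approximate:

  # Host side of one iteration of the first loop (identifiers taken from the trace)
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  i=0
  # waitforserial: block until lsblk reports a device with the expected serial (15 tries, 2 s apart)
  while (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) < 1 )); do
      (( i++ <= 15 )) || exit 1
      sleep 2
  done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  # waitforserial_disconnect: block until the serial disappears again
  while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 2; done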
00:13:51.705 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:51.705 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:51.705 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:51.705 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:51.705 [2024-07-13 05:02:57.951224] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:51.705 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:51.705 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:13:51.705 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:51.705 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:51.705 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:51.705 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:13:51.705 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:51.705 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:51.705 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:51.705 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:51.705 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:51.705 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:51.705 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:51.705 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:51.705 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:51.705 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:51.705 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:51.705 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:13:51.705 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:51.705 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:51.705 05:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:51.705 05:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:13:51.705 "tick_rate": 2700000000,
00:13:51.705 "poll_groups": [
00:13:51.705 {
00:13:51.705 "name": "nvmf_tgt_poll_group_000",
00:13:51.705 "admin_qpairs": 2,
00:13:51.705 "io_qpairs": 84,
00:13:51.705 "current_admin_qpairs": 0,
00:13:51.705 "current_io_qpairs": 0,
00:13:51.705 "pending_bdev_io": 0,
00:13:51.705 "completed_nvme_io": 226,
00:13:51.705 "transports": [
00:13:51.705 {
00:13:51.705 "trtype": "TCP"
00:13:51.705 }
00:13:51.705 ]
00:13:51.705 },
00:13:51.705 {
00:13:51.705 "name": "nvmf_tgt_poll_group_001",
00:13:51.705 "admin_qpairs": 2,
00:13:51.705 "io_qpairs": 84,
00:13:51.705 "current_admin_qpairs": 0,
00:13:51.705 "current_io_qpairs": 0,
00:13:51.705 "pending_bdev_io": 0,
00:13:51.705 "completed_nvme_io": 134,
00:13:51.705 "transports": [
00:13:51.705 {
00:13:51.705 "trtype": "TCP"
00:13:51.705 }
00:13:51.705 ]
00:13:51.705 },
00:13:51.705 {
00:13:51.705 "name": "nvmf_tgt_poll_group_002",
00:13:51.705 "admin_qpairs": 1,
00:13:51.705 "io_qpairs": 84,
00:13:51.705 "current_admin_qpairs": 0,
00:13:51.705 "current_io_qpairs": 0,
00:13:51.705 "pending_bdev_io": 0,
00:13:51.705 "completed_nvme_io": 183,
00:13:51.705 "transports": [
00:13:51.705 {
00:13:51.705 "trtype": "TCP"
00:13:51.705 }
00:13:51.705 ]
00:13:51.705 },
00:13:51.705 {
00:13:51.705 "name": "nvmf_tgt_poll_group_003",
00:13:51.705 "admin_qpairs": 2,
00:13:51.705 "io_qpairs": 84,
00:13:51.705 "current_admin_qpairs": 0,
00:13:51.705 "current_io_qpairs": 0,
00:13:51.705 "pending_bdev_io": 0,
00:13:51.705 "completed_nvme_io": 143,
00:13:51.705 "transports": [
00:13:51.705 {
00:13:51.705 "trtype": "TCP"
00:13:51.705 }
00:13:51.705 ]
00:13:51.705 }
00:13:51.705 ]
00:13:51.705 }'
00:13:51.705 05:02:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:13:51.705 05:02:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:13:51.705 05:02:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:13:51.705 05:02:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:13:51.705 05:02:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 ))
00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']'
00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini
00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup
00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync
00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e
00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e
00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0
00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 631241 ']'
00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 631241
00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 631241 ']'
00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 631241
00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname
00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 631241
00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- #
process_name=reactor_0 00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 631241' 00:13:51.706 killing process with pid 631241 00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 631241 00:13:51.706 05:02:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 631241 00:13:53.612 05:02:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:53.612 05:02:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:53.612 05:02:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:53.612 05:02:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:53.612 05:02:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:53.612 05:02:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.612 05:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:53.612 05:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.534 05:03:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:55.534 00:13:55.534 real 0m28.071s 00:13:55.534 user 1m30.006s 00:13:55.534 sys 0m4.662s 00:13:55.534 05:03:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:55.534 05:03:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.534 ************************************ 00:13:55.534 END TEST nvmf_rpc 00:13:55.534 ************************************ 00:13:55.534 05:03:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:55.534 05:03:01 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:55.534 05:03:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:55.534 05:03:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:55.534 05:03:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:55.534 ************************************ 00:13:55.534 START TEST nvmf_invalid 00:13:55.534 ************************************ 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:55.534 * Looking for test storage... 
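Before moving into nvmf_invalid: the jsum helper that produced the 7 and 336 checks at the end of TEST nvmf_rpc above (target/rpc.sh@19-20) is just a jq filter summed by awk. Reconstructed from the traced commands; the plumbing of the captured stats variable is inferred, so treat this as a sketch:

  # jsum: sum one numeric field across all poll groups in the nvmf_get_stats output
  jsum() {
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }
  # in the run above: jsum '.poll_groups[].admin_qpairs' gives 7, jsum '.poll_groups[].io_qpairs' gives 336 (4 poll groups x 84)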
00:13:55.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:55.534 05:03:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:57.454 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:57.454 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:57.454 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:57.454 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:57.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:57.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms
00:13:57.454
00:13:57.454 --- 10.0.0.2 ping statistics ---
00:13:57.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:57.454 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms
00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:57.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:57.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms
00:13:57.454
00:13:57.454 --- 10.0.0.1 ping statistics ---
00:13:57.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:57.454 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms
00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0
00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:13:57.454 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:57.712 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:13:57.712 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:13:57.712 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:57.712 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:13:57.712 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:13:57.712 05:03:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:13:57.712 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:13:57.712 05:03:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable
00:13:57.712 05:03:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:57.712 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=636218
00:13:57.712 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:57.712 05:03:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 636218
00:13:57.712 05:03:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 636218 ']'
00:13:57.712 05:03:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:57.712 05:03:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100
00:13:57.712 05:03:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:57.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:57.712 05:03:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable
00:13:57.712 05:03:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:57.712 [2024-07-13 05:03:04.078284] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
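All of this runs against the split-NIC topology that nvmftestinit built further up: the first ice port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and becomes the target address 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the pings above verify both directions. The equivalent commands, collected from the nvmf/common.sh trace (the preceding address-flush steps are omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                             # root namespace to target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace back to root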
00:13:57.712 [2024-07-13 05:03:04.078412] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.712 EAL: No free 2048 kB hugepages reported on node 1 00:13:57.971 [2024-07-13 05:03:04.215837] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:57.971 [2024-07-13 05:03:04.470780] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.972 [2024-07-13 05:03:04.470861] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.972 [2024-07-13 05:03:04.470902] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.972 [2024-07-13 05:03:04.470924] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:57.972 [2024-07-13 05:03:04.470945] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:57.972 [2024-07-13 05:03:04.471039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.972 [2024-07-13 05:03:04.471096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:57.972 [2024-07-13 05:03:04.471152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.972 [2024-07-13 05:03:04.471163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:58.910 05:03:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:58.910 05:03:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:13:58.910 05:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:58.910 05:03:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:58.910 05:03:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:58.910 05:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.910 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:58.910 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode25932 00:13:58.910 [2024-07-13 05:03:05.317905] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:58.910 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:58.910 { 00:13:58.910 "nqn": "nqn.2016-06.io.spdk:cnode25932", 00:13:58.910 "tgt_name": "foobar", 00:13:58.910 "method": "nvmf_create_subsystem", 00:13:58.910 "req_id": 1 00:13:58.910 } 00:13:58.910 Got JSON-RPC error response 00:13:58.910 response: 00:13:58.910 { 00:13:58.910 "code": -32603, 00:13:58.910 "message": "Unable to find target foobar" 00:13:58.910 }' 00:13:58.910 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:58.910 { 00:13:58.910 "nqn": "nqn.2016-06.io.spdk:cnode25932", 00:13:58.910 "tgt_name": "foobar", 00:13:58.910 "method": "nvmf_create_subsystem", 00:13:58.910 "req_id": 1 00:13:58.910 } 00:13:58.910 Got JSON-RPC error response 00:13:58.910 response: 00:13:58.910 { 00:13:58.910 "code": -32603, 00:13:58.910 "message": "Unable to find target foobar" 
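With the namespace verified, nvmfappstart launches nvmf_tgt inside it and blocks until the JSON-RPC socket answers (the waitforlisten step whose banner appears above). A minimal stand-in, assuming the default RPC socket /var/tmp/spdk.sock and abbreviating the workspace path:

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # shm id 0, tracepoint mask, cores 0-3
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2                                      # poll until the app is listening
  done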
00:13:58.910 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:58.910 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:58.910 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode4776 00:13:59.167 [2024-07-13 05:03:05.606973] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4776: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:59.167 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:59.167 { 00:13:59.167 "nqn": "nqn.2016-06.io.spdk:cnode4776", 00:13:59.167 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:59.167 "method": "nvmf_create_subsystem", 00:13:59.168 "req_id": 1 00:13:59.168 } 00:13:59.168 Got JSON-RPC error response 00:13:59.168 response: 00:13:59.168 { 00:13:59.168 "code": -32602, 00:13:59.168 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:59.168 }' 00:13:59.168 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:59.168 { 00:13:59.168 "nqn": "nqn.2016-06.io.spdk:cnode4776", 00:13:59.168 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:59.168 "method": "nvmf_create_subsystem", 00:13:59.168 "req_id": 1 00:13:59.168 } 00:13:59.168 Got JSON-RPC error response 00:13:59.168 response: 00:13:59.168 { 00:13:59.168 "code": -32602, 00:13:59.168 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:59.168 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:59.168 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:59.168 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode11084 00:13:59.426 [2024-07-13 05:03:05.851734] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11084: invalid model number 'SPDK_Controller' 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:59.426 { 00:13:59.426 "nqn": "nqn.2016-06.io.spdk:cnode11084", 00:13:59.426 "model_number": "SPDK_Controller\u001f", 00:13:59.426 "method": "nvmf_create_subsystem", 00:13:59.426 "req_id": 1 00:13:59.426 } 00:13:59.426 Got JSON-RPC error response 00:13:59.426 response: 00:13:59.426 { 00:13:59.426 "code": -32602, 00:13:59.426 "message": "Invalid MN SPDK_Controller\u001f" 00:13:59.426 }' 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:59.426 { 00:13:59.426 "nqn": "nqn.2016-06.io.spdk:cnode11084", 00:13:59.426 "model_number": "SPDK_Controller\u001f", 00:13:59.426 "method": "nvmf_create_subsystem", 00:13:59.426 "req_id": 1 00:13:59.426 } 00:13:59.426 Got JSON-RPC error response 00:13:59.426 response: 00:13:59.426 { 00:13:59.426 "code": -32602, 00:13:59.426 "message": "Invalid MN SPDK_Controller\u001f" 00:13:59.426 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' 
'84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.426 
05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.426 
05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:59.426 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ T == \- ]] 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Tvj{K{lyna(Smei[5}6Jr' 00:13:59.686 05:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Tvj{K{lyna(Smei[5}6Jr' nqn.2016-06.io.spdk:cnode31909 00:13:59.686 [2024-07-13 05:03:06.164872] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31909: invalid serial number 'Tvj{K{lyna(Smei[5}6Jr' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:59.945 { 00:13:59.945 "nqn": "nqn.2016-06.io.spdk:cnode31909", 00:13:59.945 "serial_number": "Tvj{K{lyna(Smei[5}6Jr", 00:13:59.945 "method": "nvmf_create_subsystem", 00:13:59.945 "req_id": 1 00:13:59.945 } 00:13:59.945 Got JSON-RPC error response 00:13:59.945 response: 00:13:59.945 { 
00:13:59.945 "code": -32602, 00:13:59.945 "message": "Invalid SN Tvj{K{lyna(Smei[5}6Jr" 00:13:59.945 }' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:59.945 { 00:13:59.945 "nqn": "nqn.2016-06.io.spdk:cnode31909", 00:13:59.945 "serial_number": "Tvj{K{lyna(Smei[5}6Jr", 00:13:59.945 "method": "nvmf_create_subsystem", 00:13:59.945 "req_id": 1 00:13:59.945 } 00:13:59.945 Got JSON-RPC error response 00:13:59.945 response: 00:13:59.945 { 00:13:59.945 "code": -32602, 00:13:59.945 "message": "Invalid SN Tvj{K{lyna(Smei[5}6Jr" 00:13:59.945 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
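Every printf/echo/string+= triple in this stretch is one iteration of gen_random_s from invalid.sh, expanded by xtrace. Condensed, the generator is just the following (a sketch of the same logic, not a verbatim copy; the chars array is the 32..127 code-point list dumped above):

  gen_random_s() {
      local length=$1 ll string=
      local chars=($(seq 32 127))          # ASCII code points, as listed above
      for (( ll = 0; ll < length; ll++ )); do
          local x
          x=$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")   # pick one, hex-encode it
          string+=$(echo -e "\x$x")        # append the corresponding character
      done
      # invalid.sh@28 additionally guards against a leading '-' before echoing
      echo "$string"
  }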
00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
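The string being built here is 41 characters where the previous one was 21: each is one byte over the NVMe Identify limits (a serial number is 20 bytes, a model number 40), so whatever random content comes out must be rejected on length alone:

  sn=$(gen_random_s 21)   # 21 > 20 -> expect "Invalid SN ..."
  mn=$(gen_random_s 41)   # 41 > 40 -> expect "Invalid MN ..."
  ./scripts/rpc.py nvmf_create_subsystem -s "$sn" nqn.2016-06.io.spdk:cnode31909
  ./scripts/rpc.py nvmf_create_subsystem -d "$mn" nqn.2016-06.io.spdk:cnode11346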
00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:59.945 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:59.946 
05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:59.946 
05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ? == \- ]] 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '?BJIzE.{euDY&Z|'\'',j3|]3dXTY-_1/M7,v[cWwUJ;' 00:13:59.946 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '?BJIzE.{euDY&Z|'\'',j3|]3dXTY-_1/M7,v[cWwUJ;' nqn.2016-06.io.spdk:cnode11346 00:14:00.203 [2024-07-13 05:03:06.566225] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11346: invalid model number '?BJIzE.{euDY&Z|',j3|]3dXTY-_1/M7,v[cWwUJ;' 00:14:00.203 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:00.203 { 00:14:00.203 "nqn": "nqn.2016-06.io.spdk:cnode11346", 00:14:00.203 "model_number": "?BJIzE.{euDY&Z|'\'',j3|]3dXTY-_1/M7,v[cWwUJ;", 00:14:00.203 "method": "nvmf_create_subsystem", 00:14:00.203 "req_id": 1 00:14:00.203 } 00:14:00.203 Got JSON-RPC error response 00:14:00.203 response: 00:14:00.203 { 00:14:00.203 "code": -32602, 00:14:00.203 "message": "Invalid MN ?BJIzE.{euDY&Z|'\'',j3|]3dXTY-_1/M7,v[cWwUJ;" 00:14:00.203 }' 00:14:00.203 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:00.203 { 00:14:00.203 "nqn": "nqn.2016-06.io.spdk:cnode11346", 00:14:00.203 "model_number": "?BJIzE.{euDY&Z|',j3|]3dXTY-_1/M7,v[cWwUJ;", 00:14:00.203 "method": "nvmf_create_subsystem", 00:14:00.203 "req_id": 1 00:14:00.203 } 00:14:00.203 Got JSON-RPC error response 00:14:00.203 response: 00:14:00.203 { 00:14:00.203 "code": -32602, 00:14:00.203 "message": "Invalid MN ?BJIzE.{euDY&Z|',j3|]3dXTY-_1/M7,v[cWwUJ;" 00:14:00.203 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:00.203 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:00.461 [2024-07-13 05:03:06.811120] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:00.461 05:03:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:00.719 05:03:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:00.719 05:03:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:00.719 05:03:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:00.719 05:03:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:14:00.719 05:03:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:00.977 [2024-07-13 05:03:07.346194] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, 
rc -2 00:14:00.977 05:03:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:00.977 { 00:14:00.977 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:00.977 "listen_address": { 00:14:00.977 "trtype": "tcp", 00:14:00.977 "traddr": "", 00:14:00.977 "trsvcid": "4421" 00:14:00.977 }, 00:14:00.977 "method": "nvmf_subsystem_remove_listener", 00:14:00.977 "req_id": 1 00:14:00.977 } 00:14:00.977 Got JSON-RPC error response 00:14:00.977 response: 00:14:00.977 { 00:14:00.977 "code": -32602, 00:14:00.977 "message": "Invalid parameters" 00:14:00.977 }' 00:14:00.977 05:03:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:14:00.977 { 00:14:00.977 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:00.977 "listen_address": { 00:14:00.977 "trtype": "tcp", 00:14:00.977 "traddr": "", 00:14:00.977 "trsvcid": "4421" 00:14:00.977 }, 00:14:00.977 "method": "nvmf_subsystem_remove_listener", 00:14:00.977 "req_id": 1 00:14:00.977 } 00:14:00.977 Got JSON-RPC error response 00:14:00.977 response: 00:14:00.977 { 00:14:00.977 "code": -32602, 00:14:00.978 "message": "Invalid parameters" 00:14:00.978 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:00.978 05:03:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30260 -i 0 00:14:01.236 [2024-07-13 05:03:07.607051] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30260: invalid cntlid range [0-65519] 00:14:01.236 05:03:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:01.236 { 00:14:01.236 "nqn": "nqn.2016-06.io.spdk:cnode30260", 00:14:01.236 "min_cntlid": 0, 00:14:01.236 "method": "nvmf_create_subsystem", 00:14:01.236 "req_id": 1 00:14:01.236 } 00:14:01.236 Got JSON-RPC error response 00:14:01.236 response: 00:14:01.236 { 00:14:01.236 "code": -32602, 00:14:01.236 "message": "Invalid cntlid range [0-65519]" 00:14:01.236 }' 00:14:01.236 05:03:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:01.236 { 00:14:01.236 "nqn": "nqn.2016-06.io.spdk:cnode30260", 00:14:01.236 "min_cntlid": 0, 00:14:01.236 "method": "nvmf_create_subsystem", 00:14:01.236 "req_id": 1 00:14:01.236 } 00:14:01.236 Got JSON-RPC error response 00:14:01.236 response: 00:14:01.236 { 00:14:01.236 "code": -32602, 00:14:01.236 "message": "Invalid cntlid range [0-65519]" 00:14:01.236 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:01.236 05:03:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9551 -i 65520 00:14:01.494 [2024-07-13 05:03:07.863947] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9551: invalid cntlid range [65520-65519] 00:14:01.494 05:03:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:01.494 { 00:14:01.494 "nqn": "nqn.2016-06.io.spdk:cnode9551", 00:14:01.494 "min_cntlid": 65520, 00:14:01.494 "method": "nvmf_create_subsystem", 00:14:01.494 "req_id": 1 00:14:01.494 } 00:14:01.494 Got JSON-RPC error response 00:14:01.494 response: 00:14:01.494 { 00:14:01.494 "code": -32602, 00:14:01.494 "message": "Invalid cntlid range [65520-65519]" 00:14:01.494 }' 00:14:01.494 05:03:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:01.494 { 00:14:01.494 "nqn": "nqn.2016-06.io.spdk:cnode9551", 00:14:01.494 "min_cntlid": 65520, 00:14:01.494 "method": "nvmf_create_subsystem", 
00:14:01.494 "req_id": 1 00:14:01.494 } 00:14:01.494 Got JSON-RPC error response 00:14:01.494 response: 00:14:01.494 { 00:14:01.494 "code": -32602, 00:14:01.494 "message": "Invalid cntlid range [65520-65519]" 00:14:01.494 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:01.494 05:03:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19548 -I 0 00:14:01.753 [2024-07-13 05:03:08.124771] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19548: invalid cntlid range [1-0] 00:14:01.753 05:03:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:01.753 { 00:14:01.753 "nqn": "nqn.2016-06.io.spdk:cnode19548", 00:14:01.753 "max_cntlid": 0, 00:14:01.753 "method": "nvmf_create_subsystem", 00:14:01.753 "req_id": 1 00:14:01.753 } 00:14:01.753 Got JSON-RPC error response 00:14:01.753 response: 00:14:01.753 { 00:14:01.753 "code": -32602, 00:14:01.753 "message": "Invalid cntlid range [1-0]" 00:14:01.753 }' 00:14:01.753 05:03:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:01.753 { 00:14:01.753 "nqn": "nqn.2016-06.io.spdk:cnode19548", 00:14:01.753 "max_cntlid": 0, 00:14:01.753 "method": "nvmf_create_subsystem", 00:14:01.753 "req_id": 1 00:14:01.753 } 00:14:01.753 Got JSON-RPC error response 00:14:01.753 response: 00:14:01.753 { 00:14:01.753 "code": -32602, 00:14:01.753 "message": "Invalid cntlid range [1-0]" 00:14:01.753 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:01.753 05:03:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11098 -I 65520 00:14:02.011 [2024-07-13 05:03:08.373657] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11098: invalid cntlid range [1-65520] 00:14:02.011 05:03:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:02.011 { 00:14:02.011 "nqn": "nqn.2016-06.io.spdk:cnode11098", 00:14:02.011 "max_cntlid": 65520, 00:14:02.011 "method": "nvmf_create_subsystem", 00:14:02.011 "req_id": 1 00:14:02.011 } 00:14:02.011 Got JSON-RPC error response 00:14:02.011 response: 00:14:02.011 { 00:14:02.011 "code": -32602, 00:14:02.011 "message": "Invalid cntlid range [1-65520]" 00:14:02.011 }' 00:14:02.011 05:03:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:14:02.011 { 00:14:02.011 "nqn": "nqn.2016-06.io.spdk:cnode11098", 00:14:02.011 "max_cntlid": 65520, 00:14:02.011 "method": "nvmf_create_subsystem", 00:14:02.011 "req_id": 1 00:14:02.011 } 00:14:02.011 Got JSON-RPC error response 00:14:02.011 response: 00:14:02.011 { 00:14:02.011 "code": -32602, 00:14:02.011 "message": "Invalid cntlid range [1-65520]" 00:14:02.011 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:02.011 05:03:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5725 -i 6 -I 5 00:14:02.270 [2024-07-13 05:03:08.630536] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5725: invalid cntlid range [6-5] 00:14:02.270 05:03:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:02.270 { 00:14:02.270 "nqn": "nqn.2016-06.io.spdk:cnode5725", 00:14:02.270 "min_cntlid": 6, 00:14:02.270 "max_cntlid": 5, 00:14:02.270 "method": "nvmf_create_subsystem", 00:14:02.270 
"req_id": 1 00:14:02.270 } 00:14:02.270 Got JSON-RPC error response 00:14:02.270 response: 00:14:02.270 { 00:14:02.271 "code": -32602, 00:14:02.271 "message": "Invalid cntlid range [6-5]" 00:14:02.271 }' 00:14:02.271 05:03:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:02.271 { 00:14:02.271 "nqn": "nqn.2016-06.io.spdk:cnode5725", 00:14:02.271 "min_cntlid": 6, 00:14:02.271 "max_cntlid": 5, 00:14:02.271 "method": "nvmf_create_subsystem", 00:14:02.271 "req_id": 1 00:14:02.271 } 00:14:02.271 Got JSON-RPC error response 00:14:02.271 response: 00:14:02.271 { 00:14:02.271 "code": -32602, 00:14:02.271 "message": "Invalid cntlid range [6-5]" 00:14:02.271 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:02.271 05:03:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:02.271 05:03:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:02.271 { 00:14:02.271 "name": "foobar", 00:14:02.271 "method": "nvmf_delete_target", 00:14:02.271 "req_id": 1 00:14:02.271 } 00:14:02.271 Got JSON-RPC error response 00:14:02.271 response: 00:14:02.271 { 00:14:02.271 "code": -32602, 00:14:02.271 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:02.271 }' 00:14:02.271 05:03:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:02.271 { 00:14:02.271 "name": "foobar", 00:14:02.271 "method": "nvmf_delete_target", 00:14:02.271 "req_id": 1 00:14:02.271 } 00:14:02.271 Got JSON-RPC error response 00:14:02.271 response: 00:14:02.271 { 00:14:02.271 "code": -32602, 00:14:02.271 "message": "The specified target doesn't exist, cannot delete it." 00:14:02.271 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:02.271 05:03:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:02.271 05:03:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:02.271 05:03:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:02.271 05:03:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:14:02.271 05:03:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:02.271 05:03:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:14:02.271 05:03:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:02.271 05:03:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:02.529 rmmod nvme_tcp 00:14:02.529 rmmod nvme_fabrics 00:14:02.529 rmmod nvme_keyring 00:14:02.530 05:03:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:02.530 05:03:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:14:02.530 05:03:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:14:02.530 05:03:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 636218 ']' 00:14:02.530 05:03:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 636218 00:14:02.530 05:03:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 636218 ']' 00:14:02.530 05:03:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 636218 00:14:02.530 05:03:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:14:02.530 05:03:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:02.530 05:03:08 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 636218 00:14:02.530 05:03:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:02.530 05:03:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:02.530 05:03:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 636218' 00:14:02.530 killing process with pid 636218 00:14:02.530 05:03:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 636218 00:14:02.530 05:03:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 636218 00:14:03.906 05:03:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:03.906 05:03:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:03.906 05:03:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:03.906 05:03:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:03.906 05:03:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:03.906 05:03:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.906 05:03:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:03.906 05:03:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.831 05:03:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:05.831 00:14:05.831 real 0m10.398s 00:14:05.831 user 0m25.276s 00:14:05.831 sys 0m2.627s 00:14:05.831 05:03:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:05.831 05:03:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:05.831 ************************************ 00:14:05.831 END TEST nvmf_invalid 00:14:05.831 ************************************ 00:14:05.831 05:03:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:05.831 05:03:12 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:05.831 05:03:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:05.831 05:03:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:05.831 05:03:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:05.831 ************************************ 00:14:05.831 START TEST nvmf_abort 00:14:05.831 ************************************ 00:14:05.831 05:03:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:05.831 * Looking for test storage... 
00:14:05.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:05.831 05:03:12 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:05.831 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:14:05.831 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:05.831 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:05.831 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
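Before the abort suite gets going, the cntlid matrix that closed out nvmf_invalid is worth condensing. The -i/-I flags map to min_cntlid/max_cntlid (visible in the request JSON above), and SPDK accepts controller IDs 1 through 65519 (0xFFEF; the IDs above it are reserved), so every call below must come back -32602 "Invalid cntlid range":

  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30260 -i 0        # [0-65519]
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9551  -i 65520    # [65520-65519]
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19548 -I 0        # [1-0]
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11098 -I 65520    # [1-65520]
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5725  -i 6 -I 5   # [6-5], min > max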
00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:14:05.832 05:03:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:07.738 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.738 05:03:14 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:07.738 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:07.738 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:07.739 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:07.739 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:07.739 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:07.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:07.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:14:07.998 00:14:07.998 --- 10.0.0.2 ping statistics --- 00:14:07.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.998 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:07.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:07.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:14:07.998 00:14:07.998 --- 10.0.0.1 ping statistics --- 00:14:07.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.998 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=639504 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 639504 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 639504 ']' 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:07.998 05:03:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:07.998 [2024-07-13 05:03:14.476402] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:07.998 [2024-07-13 05:03:14.476544] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.258 EAL: No free 2048 kB hugepages reported on node 1 00:14:08.258 [2024-07-13 05:03:14.610088] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:08.517 [2024-07-13 05:03:14.838153] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.517 [2024-07-13 05:03:14.838245] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:08.517 [2024-07-13 05:03:14.838289] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.517 [2024-07-13 05:03:14.838307] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.517 [2024-07-13 05:03:14.838325] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:08.517 [2024-07-13 05:03:14.841915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:08.517 [2024-07-13 05:03:14.842063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.517 [2024-07-13 05:03:14.842072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:09.084 [2024-07-13 05:03:15.457302] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:09.084 Malloc0 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:09.084 Delay0 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.084 05:03:15 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:09.084 [2024-07-13 05:03:15.574715] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.084 05:03:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:09.344 05:03:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.344 05:03:15 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:14:09.344 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.344 [2024-07-13 05:03:15.732376] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:11.876 Initializing NVMe Controllers 00:14:11.876 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:11.876 controller IO queue size 128 less than required 00:14:11.876 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:14:11.876 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:14:11.876 Initialization complete. Launching workers. 
00:14:11.877 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 26109 00:14:11.877 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 26166, failed to submit 66 00:14:11.877 success 26109, unsuccess 57, failed 0 00:14:11.877 05:03:17 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:11.877 05:03:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.877 05:03:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:11.877 05:03:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.877 05:03:17 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:14:11.877 05:03:17 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:14:11.877 05:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:11.877 05:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:14:11.877 05:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:11.877 05:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:14:11.877 05:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:11.877 05:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:11.877 rmmod nvme_tcp 00:14:11.877 rmmod nvme_fabrics 00:14:11.877 rmmod nvme_keyring 00:14:11.877 05:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:11.877 05:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:14:11.877 05:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:14:11.877 05:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 639504 ']' 00:14:11.877 05:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 639504 00:14:11.877 05:03:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 639504 ']' 00:14:11.877 05:03:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 639504 00:14:11.877 05:03:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:14:11.877 05:03:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:11.877 05:03:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 639504 00:14:11.877 05:03:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:11.877 05:03:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:11.877 05:03:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 639504' 00:14:11.877 killing process with pid 639504 00:14:11.877 05:03:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 639504 00:14:11.877 05:03:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 639504 00:14:13.258 05:03:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:13.258 05:03:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:13.258 05:03:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:13.258 05:03:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:13.258 05:03:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:13.258 05:03:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.258 05:03:19 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.258 05:03:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.164 05:03:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:15.164 00:14:15.164 real 0m9.193s 00:14:15.164 user 0m15.120s 00:14:15.164 sys 0m2.634s 00:14:15.164 05:03:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:15.164 05:03:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:15.164 ************************************ 00:14:15.164 END TEST nvmf_abort 00:14:15.164 ************************************ 00:14:15.164 05:03:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:15.164 05:03:21 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:15.164 05:03:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:15.164 05:03:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:15.164 05:03:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:15.164 ************************************ 00:14:15.164 START TEST nvmf_ns_hotplug_stress 00:14:15.164 ************************************ 00:14:15.164 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:15.164 * Looking for test storage... 00:14:15.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:15.164 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:15.164 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:14:15.164 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.164 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.165 05:03:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:15.165 05:03:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:15.165 05:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:17.071 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:17.071 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.071 05:03:23 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:17.071 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:17.071 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:17.071 05:03:23 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:17.071 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:17.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:17.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:14:17.329 00:14:17.329 --- 10.0.0.2 ping statistics --- 00:14:17.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.329 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:17.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:17.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:14:17.329 00:14:17.329 --- 10.0.0.1 ping statistics --- 00:14:17.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.329 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=641985 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 641985 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 641985 ']' 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:17.329 05:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.329 [2024-07-13 05:03:23.773764] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:17.329 [2024-07-13 05:03:23.773929] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.589 EAL: No free 2048 kB hugepages reported on node 1 00:14:17.589 [2024-07-13 05:03:23.907439] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:17.848 [2024-07-13 05:03:24.144790] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.848 [2024-07-13 05:03:24.144880] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.848 [2024-07-13 05:03:24.144918] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.848 [2024-07-13 05:03:24.144940] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.848 [2024-07-13 05:03:24.144963] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:17.848 [2024-07-13 05:03:24.145103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.848 [2024-07-13 05:03:24.145192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.849 [2024-07-13 05:03:24.145216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:18.414 05:03:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:18.414 05:03:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:14:18.414 05:03:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:18.414 05:03:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:18.414 05:03:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.414 05:03:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.414 05:03:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:14:18.414 05:03:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:18.672 [2024-07-13 05:03:24.973377] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:18.672 05:03:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:18.929 05:03:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:19.186 [2024-07-13 05:03:25.496156] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:19.186 05:03:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:19.444 05:03:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:14:19.702 Malloc0 00:14:19.702 05:03:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:19.961 Delay0 00:14:19.961 05:03:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:20.219 05:03:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:14:20.476 NULL1 00:14:20.476 05:03:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:20.734 05:03:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=642412 00:14:20.734 05:03:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:14:20.734 05:03:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412 00:14:20.734 05:03:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.734 EAL: No free 2048 kB hugepages reported on node 1 00:14:22.109 Read completed with error (sct=0, sc=11) 00:14:22.109 05:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:22.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:22.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:22.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:22.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:22.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:22.367 05:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:14:22.368 05:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:14:22.625 true 00:14:22.625 05:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412 00:14:22.625 05:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.192 05:03:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:23.451 05:03:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:14:23.451 05:03:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:14:23.710 true 00:14:23.710 
05:03:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412 00:14:23.710 05:03:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.968 05:03:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:24.226 05:03:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:14:24.227 05:03:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:14:24.485 true 00:14:24.485 05:03:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412 00:14:24.485 05:03:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.743 05:03:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:25.001 05:03:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:14:25.001 05:03:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:14:25.259 true 00:14:25.259 05:03:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412 00:14:25.259 05:03:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:26.636 05:03:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:26.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:26.636 05:03:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:14:26.636 05:03:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:14:26.894 true 00:14:26.894 05:03:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412 00:14:26.894 05:03:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.151 05:03:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:27.409 05:03:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:14:27.409 05:03:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:14:27.667 true 00:14:27.667 
05:03:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412 00:14:27.667 05:03:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.604 05:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:28.604 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:28.604 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:28.604 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:28.863 05:03:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:14:28.863 05:03:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:14:29.120 true 00:14:29.120 05:03:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412 00:14:29.120 05:03:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.377 05:03:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:29.636 05:03:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:14:29.636 05:03:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:14:29.636 true 00:14:29.894 05:03:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412 00:14:29.894 05:03:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.828 05:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:31.087 05:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:14:31.087 05:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:14:31.344 true 00:14:31.344 05:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412 00:14:31.344 05:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.601 05:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:31.860 05:03:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:14:31.860 05:03:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:14:31.860 true 00:14:32.118 05:03:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412 00:14:32.118 05:03:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:32.719 05:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:32.976 05:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:14:32.976 05:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:14:33.234 true 00:14:33.234 05:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412 00:14:33.234 05:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:33.492 05:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:33.750 05:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:14:33.750 05:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:14:34.008 true 00:14:34.008 05:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412 00:14:34.008 05:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.946 05:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:34.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:35.204 05:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:14:35.204 05:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:35.462 true 00:14:35.462 05:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412 00:14:35.462 05:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:35.462 05:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:35.722 05:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:14:35.722 05:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:14:35.979 true 00:14:35.979 05:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412 00:14:35.979 05:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:36.913 05:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:37.171 05:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:14:37.171 05:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:37.427 true 00:14:37.427 05:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412 00:14:37.427 05:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:37.684 05:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:37.942 05:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:14:37.942 05:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:14:38.199 true 00:14:38.199 05:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412 00:14:38.199 05:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:39.171 05:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:39.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:39.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:39.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:39.434 05:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:14:39.434 05:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:39.694 true 00:14:39.694 05:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412 00:14:39.694 05:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:39.694 05:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:39.952 05:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 
00:14:39.952 05:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:40.209 true 00:14:40.209 05:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412 00:14:40.209 05:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:41.582 05:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:41.582 05:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:14:41.582 05:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:14:41.839 true 00:14:41.839 05:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412 00:14:41.840 05:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.097 05:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:42.354 05:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:14:42.354 05:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:42.612 true 00:14:42.612 05:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412 00:14:42.612 05:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.550 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:43.550 05:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:43.550 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:43.550 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:43.550 05:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:14:43.550 05:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:43.808 true 00:14:43.808 05:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412 00:14:43.808 05:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:44.066 05:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:44.324 05:03:50 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:14:44.324 05:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:44.581 true 00:14:44.581 05:03:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412 00:14:44.581 05:03:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:45.518 05:03:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:45.776 05:03:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:14:45.776 05:03:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:46.033 true 00:14:46.033 05:03:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412 00:14:46.033 05:03:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:46.291 05:03:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:46.548 05:03:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:46.548 05:03:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:46.804 true 00:14:46.804 05:03:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412 00:14:46.804 05:03:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:47.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:47.741 05:03:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:47.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:47.999 05:03:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:47.999 05:03:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:48.256 true 00:14:48.256 05:03:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412 00:14:48.256 05:03:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.514 05:03:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:48.773 05:03:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:14:48.773 05:03:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:14:49.056 true
00:14:49.056 05:03:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412
00:14:49.056 05:03:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:49.992 05:03:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:49.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:50.250 05:03:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:14:50.250 05:03:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:14:50.508 true
00:14:50.508 05:03:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412
00:14:50.508 05:03:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:50.766 05:03:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:51.024 05:03:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:14:51.024 05:03:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:14:51.282 true
00:14:51.282 05:03:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412
00:14:51.282 05:03:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:52.245 Initializing NVMe Controllers
00:14:52.245 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:52.245 Controller IO queue size 128, less than required.
00:14:52.245 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:52.245 Controller IO queue size 128, less than required.
00:14:52.245 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:52.245 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:14:52.245 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:14:52.245 Initialization complete. Launching workers.
00:14:52.245 ========================================================
00:14:52.245                                                                                                 Latency(us)
00:14:52.245 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:14:52.245 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     775.56       0.38   92926.64    3047.58 1109222.40
00:14:52.245 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    8659.47       4.23   14781.00    3249.15  389865.64
00:14:52.245 ========================================================
00:14:52.245 Total                                                                    :    9435.03       4.61   21204.61    3047.58 1109222.40
00:14:52.245
00:14:52.245 05:03:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:52.245 05:03:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:14:52.245 05:03:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:14:52.503 true
00:14:52.503 05:03:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 642412
00:14:52.503 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (642412) - No such process
00:14:52.503 05:03:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 642412
00:14:52.503 05:03:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:52.761 05:03:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:53.018 05:03:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:14:53.018 05:03:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:14:53.018 05:03:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:14:53.018 05:03:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:53.018 05:03:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:14:53.276 null0
00:14:53.276 05:03:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:53.276 05:03:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:53.276 05:03:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:14:53.535 null1
00:14:53.535 05:03:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:53.535 05:03:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:53.535 05:03:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:14:53.793 null2
00:14:53.793 05:04:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:53.793 05:04:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads
)) 00:14:53.793 05:04:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:54.051 null3 00:14:54.051 05:04:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:54.051 05:04:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:54.051 05:04:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:54.309 null4 00:14:54.309 05:04:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:54.309 05:04:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:54.309 05:04:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:54.567 null5 00:14:54.567 05:04:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:54.567 05:04:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:54.567 05:04:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:54.826 null6 00:14:54.826 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:54.826 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:54.826 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:55.084 null7 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
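[editor's note] A quick sanity check on the spdk_nvme_perf summary above: the Total row's IOPS is the sum of the two namespaces (775.56 + 8659.47 = 9435.03), and its average latency is the IOPS-weighted mean of the per-namespace averages. The one-liner below reproduces it; plain awk, nothing SPDK-specific, with the small drift from 21204.61 coming from rounding in the printed per-namespace figures:

```bash
# IOPS-weighted mean latency across NSID 1 and NSID 2 from the table above.
awk 'BEGIN { printf "%.2f us\n", (775.56*92926.64 + 8659.47*14781.00) / (775.56 + 8659.47) }'
# prints ~21204.58 us, matching the Total row within display rounding
```

The two-order-of-magnitude gap between the rows (92926 us vs 14781 us average) is what you would expect here: NSID 1 is the Delay0 bdev being hot-removed and re-added, while NSID 2 is the null bdev that only gets resized.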
00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
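[editor's note] From this point the single-loop phase is replaced by eight concurrent workers: one null bdev per worker (null0 through null7, 100 blocks of 4096 bytes each) and one background add_remove job per namespace ID, later reaped with wait (the PIDs 646587 646588 646590 646592 646595 646597 646599 646601 in the @66 record further down). Pieced together from the @14-@18 and @58-@66 markers, the pattern looks roughly like the sketch below; the function body is a hedged reconstruction, not the verbatim script, and it reuses the assumed RPC/NQN variables from the earlier sketch:

```bash
# Hedged reconstruction of the parallel add/remove phase from the trace
# markers; RPC method names, bdev names, and counts are taken from the log,
# the surrounding structure is an assumption.
add_remove() {
    local nsid=$1 bdev=$2                                      # @14
    for ((i = 0; i < 10; i++)); do                             # @16
        "$RPC" nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev" # @17: attach
        "$RPC" nvmf_subsystem_remove_ns "$NQN" "$nsid"         # @18: detach
    done
}

nthreads=8                                     # @58
pids=()
for ((i = 0; i < nthreads; i++)); do           # @59-@60: one bdev per worker
    "$RPC" bdev_null_create "null$i" 100 4096
done
for ((i = 0; i < nthreads; i++)); do           # @62-@64: launch workers
    add_remove $((i + 1)) "null$i" &           # @63: runs in a subshell, so
    pids+=($!)                                 #       the loop counters don't clash
done
wait "${pids[@]}"                              # @66: reap all eight workers
```

The interleaved @17/@18 records that follow are these eight jobs thrashing namespaces 1 through 8 of nqn.2016-06.io.spdk:cnode1 concurrently, which is the actual stress being tested.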
00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 646587 646588 646590 646592 646595 646597 646599 646601 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.084 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:55.341 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:55.341 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:55.341 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:55.341 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:55.341 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:55.342 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:55.342 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:55.342 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:55.600 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.600 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.600 05:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:55.600 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.600 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.600 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:14:55.600 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.600 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.600 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:55.600 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.600 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.600 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.600 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.601 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:55.601 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:55.601 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.601 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.601 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:55.601 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.601 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.601 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:55.601 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.601 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.601 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:55.859 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:55.859 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:55.859 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:55.859 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:55.859 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:55.859 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:55.859 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:55.859 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:56.117 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.117 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.117 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:56.117 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.117 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.117 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:56.117 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.117 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.117 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:56.117 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.117 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.118 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:56.118 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.118 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.118 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.118 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:56.118 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.118 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:56.118 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.118 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.118 05:04:02 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:56.118 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.118 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.118 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:56.376 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:56.376 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:56.376 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:56.376 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:56.376 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:56.376 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:56.376 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.376 05:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:56.634 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.634 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.634 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:56.634 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.634 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.634 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:56.634 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.634 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.634 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:56.634 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.634 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.634 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:56.634 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.634 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.634 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:56.634 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.634 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.634 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:56.634 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.634 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.634 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.634 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:56.634 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.634 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:56.959 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:56.959 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:56.959 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:56.959 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:56.959 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:56.959 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.959 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:56.959 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:57.218 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.218 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.218 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:57.218 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.218 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.218 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:57.218 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.218 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.218 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:57.218 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.218 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.218 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:57.218 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.218 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.218 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:57.218 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.218 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.218 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.218 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.218 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:57.218 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:57.218 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.218 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.218 
05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:57.476 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:57.476 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:57.476 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:57.476 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:57.476 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:57.476 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:57.476 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:57.476 05:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:57.735 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.735 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.735 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:57.735 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.735 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.735 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:57.735 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.735 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.735 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:57.735 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.735 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.735 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:57.735 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.735 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.735 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:57.735 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.735 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.735 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:57.735 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.735 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.735 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:57.735 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.735 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.735 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:57.993 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:57.993 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:57.993 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:57.993 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:57.993 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:57.993 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:57.993 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:57.993 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:58.250 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:14:58.250 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.250 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:58.250 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.250 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.250 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:58.250 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.250 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.250 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:58.250 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.508 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.508 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:58.508 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.508 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.508 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:58.508 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.508 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.508 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:58.508 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.508 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.508 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:58.508 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.508 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.508 05:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:58.765 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:58.765 
05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:58.765 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:58.765 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:58.765 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:58.765 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:58.765 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:58.765 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:59.023 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.023 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.023 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:59.023 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.023 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.023 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:59.023 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.023 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.023 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:59.023 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.023 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.023 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:59.023 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.023 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.023 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:59.023 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.023 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.023 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:59.023 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.023 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.023 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:59.023 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.023 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.023 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:59.281 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:59.281 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.281 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:59.281 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:59.281 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:59.281 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:59.281 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:59.281 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:59.540 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.540 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.540 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:59.540 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:14:59.540 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.540 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:59.540 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.540 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.540 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:59.540 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.540 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.540 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:59.540 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.540 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.540 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:59.540 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.540 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.540 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.540 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:59.540 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.540 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:59.540 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.540 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.540 05:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:59.798 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:59.798 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.798 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:59.798 
05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:59.798 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:59.798 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:59.798 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:59.798 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:00.056 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.056 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.056 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:00.056 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.056 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.056 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:00.056 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.056 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.056 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:00.056 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.056 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.056 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:00.056 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.056 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.056 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.056 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.056 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:00.056 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:00.056 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.056 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.056 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:00.056 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.056 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.056 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:00.315 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:00.315 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:00.315 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:00.315 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:00.315 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:00.315 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:00.315 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:00.315 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:00.575 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.575 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.575 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.575 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.575 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.575 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.575 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.575 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.575 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
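The wall of @16/@17/@18 entries above (and the last few counter ticks just below) is the core of the hotplug stress: eight namespace/bdev pairs (nsid 1..8 backed by null0..null7) are attached to and detached from nqn.2016-06.io.spdk:cnode1 ten times each, and the out-of-order interleaving of the trace lines suggests the eight cycles run concurrently. A minimal bash sketch of what ns_hotplug_stress.sh appears to execute; this is reconstructed from the trace tags, not the verbatim script, and the rpc.py path is shortened:

    # Reconstruction: @16 is the for-loop header, @17 the attach, @18 the detach.
    add_remove() {
        local nsid=$1 bdev=$2 i
        for ((i = 0; i < 10; ++i)); do
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
    for n in $(seq 1 8); do
        add_remove "$n" "null$((n - 1))" &   # eight concurrent cycles, hence the interleaved trace
    done
    wait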
00:15:00.575 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:15:00.575 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:15:00.575 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:15:00.575 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:15:00.575 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:15:00.575 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:15:00.575 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:15:00.575 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:15:00.575 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:15:00.575 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:15:00.575 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync
00:15:00.575 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:15:00.575 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e
00:15:00.575 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:15:00.575 05:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:15:00.575 rmmod nvme_tcp
00:15:00.575 rmmod nvme_fabrics
00:15:00.575 rmmod nvme_keyring
00:15:00.575 05:04:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:15:00.575 05:04:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e
00:15:00.575 05:04:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0
00:15:00.575 05:04:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 641985 ']'
00:15:00.575 05:04:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 641985
00:15:00.575 05:04:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 641985 ']'
00:15:00.575 05:04:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 641985
00:15:00.575 05:04:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname
00:15:00.575 05:04:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:15:00.575 05:04:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 641985
00:15:00.575 05:04:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:15:00.575 05:04:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:15:00.575 05:04:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 641985'
killing process with pid 641985
00:15:00.575 05:04:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 641985
00:15:00.575 05:04:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 641985
00:15:01.954 05:04:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:15:01.954 05:04:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:15:01.954 05:04:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:15:01.954 05:04:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:15:01.954 05:04:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:15:01.954 05:04:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:01.954 05:04:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:01.954 05:04:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:03.858 05:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:15:04.117
00:15:04.117 real 0m48.932s
00:15:04.117 user 3m37.701s
00:15:04.117 sys 0m16.944s
00:15:04.117 05:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable
00:15:04.117 05:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:15:04.117 ************************************
00:15:04.117 END TEST nvmf_ns_hotplug_stress
00:15:04.117 ************************************
00:15:04.117 05:04:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:15:04.117 05:04:10 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:15:04.117 05:04:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:15:04.117 05:04:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:15:04.117 05:04:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:15:04.117 ************************************
00:15:04.117 START TEST nvmf_connect_stress
00:15:04.117 ************************************
00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:15:04.117 * Looking for test storage...
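Condensed, the nvmftestfini teardown traced above amounts to unloading the kernel initiator modules, reaping the nvmf_tgt process, and dismantling the namespaced link. A rough shell equivalent; the body of _remove_spdk_ns is not echoed in the log, so the netns deletion shown here is an assumption:

    sync
    modprobe -v -r nvme-tcp      # the rmmod lines show nvme_tcp, nvme_fabrics and nvme_keyring leaving
    modprobe -v -r nvme-fabrics
    kill 641985 && wait 641985   # killprocess: the target appears in ps as reactor_1
    ip netns delete cvl_0_0_ns_spdk 2> /dev/null   # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1

With the previous target gone, the harness moves straight into the next test's storage scan below.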
00:15:04.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:15:04.117 05:04:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:06.023 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:06.023 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:06.023 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:06.023 05:04:12 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:06.023 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:06.023 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:06.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:06.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms
00:15:06.023
00:15:06.024 --- 10.0.0.2 ping statistics ---
00:15:06.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:06.024 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms
00:15:06.024 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:15:06.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:06.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms
00:15:06.024
00:15:06.024 --- 10.0.0.1 ping statistics ---
00:15:06.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:06.024 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms
00:15:06.024 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:06.024 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0
00:15:06.024 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:15:06.024 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:06.024 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:15:06.024 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:15:06.024 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:06.024 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:15:06.024 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:15:06.024 05:04:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:15:06.024 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:15:06.024 05:04:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable
00:15:06.024 05:04:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:15:06.024 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=649468
00:15:06.024 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:15:06.024 05:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 649468
00:15:06.024 05:04:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 649468 ']'
00:15:06.024 05:04:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:06.024 05:04:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100
00:15:06.024 05:04:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:06.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:06.024 05:04:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable
00:15:06.024 05:04:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:15:06.283 [2024-07-13 05:04:12.577616] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
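Taken together with the @244-@267 plumbing a few lines back, the bring-up just traced reduces to about a dozen commands: move one E810 port into its own network namespace as the target side, leave the other in the root namespace as the initiator, address the 10.0.0.0/24 link, open TCP/4420, verify reachability both ways, then start nvmf_tgt inside the namespace and wait for its RPC socket. A condensed sketch; the commands are lifted from the trace, while the readiness poll at the end is an illustrative stand-in for the waitforlisten helper in autotest_common.sh:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns reaches the target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and back the other way
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    for ((retry = 0; retry < 100; retry++)); do        # max_retries=100 per the trace
        [[ -S /var/tmp/spdk.sock ]] && break           # assumed check; the helper may probe differently
        sleep 0.1
    done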
00:15:06.283 [2024-07-13 05:04:12.577755] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.283 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.283 [2024-07-13 05:04:12.708358] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:06.542 [2024-07-13 05:04:12.935026] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.542 [2024-07-13 05:04:12.935089] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:06.542 [2024-07-13 05:04:12.935133] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.542 [2024-07-13 05:04:12.935151] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.542 [2024-07-13 05:04:12.935170] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:06.542 [2024-07-13 05:04:12.935329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.542 [2024-07-13 05:04:12.935368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.542 [2024-07-13 05:04:12.935379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.107 [2024-07-13 05:04:13.521296] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.107 [2024-07-13 05:04:13.560417] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.107 NULL1 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=649625 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:07.107 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:07.367 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:07.367 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:07.367 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:07.367 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:07.367 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:07.367 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:07.367 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:07.367 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:07.367 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:07.367 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:07.367 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:07.367 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:07.367 05:04:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.367 05:04:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.367 EAL: No free 2048 kB hugepages reported on node 1 00:15:07.626 05:04:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.626 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:07.626 05:04:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:07.626 05:04:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.626 05:04:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.882 05:04:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.883 05:04:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:07.883 05:04:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:07.883 05:04:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.883 05:04:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:08.142 05:04:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.143 05:04:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:08.143 
05:04:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:08.143 05:04:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.143 05:04:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:08.712 05:04:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.712 05:04:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:08.712 05:04:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:08.712 05:04:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.712 05:04:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:08.973 05:04:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.973 05:04:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:08.973 05:04:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:08.973 05:04:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.973 05:04:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:09.238 05:04:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.238 05:04:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:09.238 05:04:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:09.238 05:04:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.238 05:04:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:09.514 05:04:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.514 05:04:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:09.514 05:04:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:09.514 05:04:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.514 05:04:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:09.772 05:04:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.772 05:04:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:09.772 05:04:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:09.772 05:04:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.772 05:04:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:10.340 05:04:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.340 05:04:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:10.340 05:04:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:10.340 05:04:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.340 05:04:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:10.598 05:04:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.598 05:04:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:10.598 05:04:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:15:10.598 05:04:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.598 05:04:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:10.856 05:04:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.856 05:04:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:10.856 05:04:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:10.856 05:04:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.856 05:04:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:11.115 05:04:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.115 05:04:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:11.115 05:04:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:11.115 05:04:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.115 05:04:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:11.374 05:04:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.374 05:04:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:11.374 05:04:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:11.374 05:04:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.374 05:04:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:11.944 05:04:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.944 05:04:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:11.944 05:04:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:11.944 05:04:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.944 05:04:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:12.203 05:04:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.203 05:04:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:12.203 05:04:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:12.203 05:04:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.203 05:04:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:12.461 05:04:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.461 05:04:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:12.461 05:04:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:12.461 05:04:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.461 05:04:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:12.721 05:04:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.721 05:04:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:12.721 05:04:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:12.721 05:04:19 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.721 05:04:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:12.980 05:04:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.980 05:04:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:12.980 05:04:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:12.980 05:04:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.980 05:04:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:13.545 05:04:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.545 05:04:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:13.545 05:04:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.545 05:04:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.545 05:04:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:13.803 05:04:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.803 05:04:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:13.803 05:04:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.803 05:04:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.803 05:04:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:14.064 05:04:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.064 05:04:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:14.064 05:04:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:14.064 05:04:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.064 05:04:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:14.322 05:04:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.322 05:04:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:14.323 05:04:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:14.323 05:04:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.323 05:04:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:14.582 05:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.582 05:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:14.582 05:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:14.582 05:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.582 05:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:15.151 05:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.151 05:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:15.151 05:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.151 05:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.151 
05:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:15.409 05:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.409 05:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:15.409 05:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.409 05:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.409 05:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:15.668 05:04:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.668 05:04:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:15.668 05:04:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.668 05:04:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.668 05:04:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:15.927 05:04:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.927 05:04:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:15.927 05:04:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.927 05:04:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.927 05:04:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:16.496 05:04:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.496 05:04:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:16.496 05:04:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:16.496 05:04:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.496 05:04:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:16.754 05:04:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.754 05:04:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:16.754 05:04:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:16.754 05:04:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.754 05:04:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:17.012 05:04:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.012 05:04:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:17.012 05:04:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:17.012 05:04:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.012 05:04:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:17.271 05:04:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.271 05:04:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:17.271 05:04:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:17.271 05:04:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.271 05:04:23 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:15:17.530 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:17.530 05:04:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.530 05:04:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 649625 00:15:17.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (649625) - No such process 00:15:17.530 05:04:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 649625 00:15:17.530 05:04:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:17.530 05:04:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:17.530 05:04:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:17.530 05:04:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:17.530 05:04:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:15:17.530 05:04:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:17.530 05:04:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:15:17.530 05:04:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:17.530 05:04:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:17.530 rmmod nvme_tcp 00:15:17.530 rmmod nvme_fabrics 00:15:17.530 rmmod nvme_keyring 00:15:17.787 05:04:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:17.788 05:04:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:15:17.788 05:04:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:15:17.788 05:04:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 649468 ']' 00:15:17.788 05:04:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 649468 00:15:17.788 05:04:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 649468 ']' 00:15:17.788 05:04:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 649468 00:15:17.788 05:04:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:15:17.788 05:04:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:17.788 05:04:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 649468 00:15:17.788 05:04:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:17.788 05:04:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:17.788 05:04:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 649468' 00:15:17.788 killing process with pid 649468 00:15:17.788 05:04:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 649468 00:15:17.788 05:04:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 649468 00:15:19.165 05:04:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:19.165 05:04:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:19.165 05:04:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:19.165 05:04:25 nvmf_tcp.nvmf_connect_stress 
-- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:19.165 05:04:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:19.165 05:04:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.165 05:04:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:19.165 05:04:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.074 05:04:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:21.074 00:15:21.074 real 0m16.950s 00:15:21.074 user 0m42.472s 00:15:21.074 sys 0m5.790s 00:15:21.074 05:04:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:21.074 05:04:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:21.074 ************************************ 00:15:21.074 END TEST nvmf_connect_stress 00:15:21.074 ************************************ 00:15:21.074 05:04:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:21.074 05:04:27 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:21.074 05:04:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:21.074 05:04:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.074 05:04:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:21.074 ************************************ 00:15:21.074 START TEST nvmf_fused_ordering 00:15:21.074 ************************************ 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:21.074 * Looking for test storage... 
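Both connect_stress (above) and fused_ordering (below) drive the same target-side setup through rpc_cmd, the autotest_common.sh wrapper around scripts/rpc.py. As a stand-alone reference, that sequence reduces to roughly the sketch below; the flags are copied from the rpc_cmd calls in this log, and it assumes an nvmf_tgt is already running and reachable on the default RPC socket:

  # minimal sketch, assuming nvmf_tgt already listens on /var/tmp/spdk.sock
  RPC=scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192                       # same transport options as above
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10                                 # allow any host, serial, max 10 namespaces
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420                                     # TCP listener on 10.0.0.2:4420
  $RPC bdev_null_create NULL1 1000 512                               # 1000 MiB null bdev, 512-byte blocks
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1        # exposed as namespace 1 (~1GB, per the log)

With that in place, the initiator-side tools seen in this log (connect_stress, fused_ordering) take the target via -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'.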
00:15:21.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:15:21.074 05:04:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:22.978 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:22.978 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:15:22.978 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:22.978 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:22.978 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:22.978 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:22.978 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:22.978 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:15:22.978 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:22.978 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:15:22.978 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:15:22.978 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:15:22.978 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:15:22.978 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:15:22.978 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:15:22.978 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:22.978 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:22.979 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:22.979 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:22.979 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:22.979 05:04:29 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:22.979 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:22.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:22.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:15:22.979 00:15:22.979 --- 10.0.0.2 ping statistics --- 00:15:22.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.979 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:22.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:22.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:15:22.979 00:15:22.979 --- 10.0.0.1 ping statistics --- 00:15:22.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.979 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=652902 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 652902 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 652902 ']' 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:22.979 05:04:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:23.238 [2024-07-13 05:04:29.522777] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
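The ping exchange above verifies the two-namespace topology that nvmf/common.sh builds for NET_TYPE=phy runs. Condensed from the ip/iptables commands earlier in this log (cvl_0_0/cvl_0_1 are this CI's renamed e810 ports; run as root), the plumbing is roughly:

  # sketch replayed from the log, not a general recipe; interface names are CI-specific
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port moves into its own netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                       # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target -> initiator

nvmf_tgt itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt), which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD above.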
00:15:23.238 [2024-07-13 05:04:29.522935] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.238 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.238 [2024-07-13 05:04:29.659705] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.496 [2024-07-13 05:04:29.915203] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:23.496 [2024-07-13 05:04:29.915292] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:23.496 [2024-07-13 05:04:29.915320] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:23.496 [2024-07-13 05:04:29.915345] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:23.496 [2024-07-13 05:04:29.915366] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:23.496 [2024-07-13 05:04:29.915413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:24.059 [2024-07-13 05:04:30.491070] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:24.059 [2024-07-13 05:04:30.507281] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.059 05:04:30 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:24.059 NULL1 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.059 05:04:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:24.319 [2024-07-13 05:04:30.579094] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:24.319 [2024-07-13 05:04:30.579192] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid653055 ] 00:15:24.319 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.922 Attached to nqn.2016-06.io.spdk:cnode1 00:15:24.922 Namespace ID: 1 size: 1GB 00:15:24.922 fused_ordering(0) 00:15:24.922 fused_ordering(1) 00:15:24.922 fused_ordering(2) 00:15:24.922 fused_ordering(3) 00:15:24.922 fused_ordering(4) 00:15:24.922 fused_ordering(5) 00:15:24.922 fused_ordering(6) 00:15:24.922 fused_ordering(7) 00:15:24.922 fused_ordering(8) 00:15:24.922 fused_ordering(9) 00:15:24.922 fused_ordering(10) 00:15:24.922 fused_ordering(11) 00:15:24.922 fused_ordering(12) 00:15:24.922 fused_ordering(13) 00:15:24.922 fused_ordering(14) 00:15:24.922 fused_ordering(15) 00:15:24.922 fused_ordering(16) 00:15:24.922 fused_ordering(17) 00:15:24.922 fused_ordering(18) 00:15:24.922 fused_ordering(19) 00:15:24.922 fused_ordering(20) 00:15:24.922 fused_ordering(21) 00:15:24.922 fused_ordering(22) 00:15:24.922 fused_ordering(23) 00:15:24.922 fused_ordering(24) 00:15:24.922 fused_ordering(25) 00:15:24.922 fused_ordering(26) 00:15:24.922 fused_ordering(27) 00:15:24.922 fused_ordering(28) 00:15:24.922 fused_ordering(29) 00:15:24.922 fused_ordering(30) 00:15:24.922 fused_ordering(31) 00:15:24.922 fused_ordering(32) 00:15:24.922 fused_ordering(33) 00:15:24.922 fused_ordering(34) 00:15:24.922 fused_ordering(35) 00:15:24.922 fused_ordering(36) 00:15:24.922 fused_ordering(37) 00:15:24.922 fused_ordering(38) 00:15:24.922 fused_ordering(39) 00:15:24.922 fused_ordering(40) 00:15:24.922 fused_ordering(41) 00:15:24.922 fused_ordering(42) 00:15:24.922 fused_ordering(43) 00:15:24.922 
fused_ordering(44) 00:15:24.922 [... fused_ordering(45) through fused_ordering(1012) elided: one identical progress entry per iteration, timestamps advancing from 00:15:24.922 to 00:15:27.935 ...]
00:15:27.935 fused_ordering(1013) 00:15:27.935 fused_ordering(1014) 00:15:27.935 fused_ordering(1015) 00:15:27.935 fused_ordering(1016) 00:15:27.935 fused_ordering(1017) 00:15:27.935 fused_ordering(1018) 00:15:27.935 fused_ordering(1019) 00:15:27.935 fused_ordering(1020) 00:15:27.935 fused_ordering(1021) 00:15:27.935 fused_ordering(1022) 00:15:27.935 fused_ordering(1023) 00:15:27.935 05:04:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:27.935 05:04:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:27.935 05:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:27.935 05:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:15:27.935 05:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:27.935 05:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:15:27.935 05:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:27.935 05:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:27.935 rmmod nvme_tcp 00:15:27.935 rmmod nvme_fabrics 00:15:27.935 rmmod nvme_keyring 00:15:27.935 05:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:27.935 05:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:15:27.935 05:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:15:27.935 05:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 652902 ']' 00:15:27.935 05:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 652902 00:15:27.935 05:04:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 652902 ']' 00:15:27.935 05:04:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 652902 00:15:27.935 05:04:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:15:27.935 05:04:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:27.935 05:04:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 652902 00:15:27.935 05:04:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:27.935 05:04:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:27.935 05:04:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 652902' 00:15:27.935 killing process with pid 652902 00:15:27.935 05:04:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 652902 00:15:27.935 05:04:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 652902 00:15:29.314 05:04:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:29.314 05:04:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:29.314 05:04:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:29.314 05:04:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:29.314 05:04:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:29.314 05:04:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.314 05:04:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 14> /dev/null' 00:15:29.314 05:04:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.219 05:04:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:31.219 00:15:31.219 real 0m10.135s 00:15:31.219 user 0m8.089s 00:15:31.219 sys 0m3.828s 00:15:31.219 05:04:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:31.219 05:04:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:31.219 ************************************ 00:15:31.219 END TEST nvmf_fused_ordering 00:15:31.219 ************************************ 00:15:31.219 05:04:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:31.219 05:04:37 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:31.219 05:04:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:31.219 05:04:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:31.219 05:04:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:31.219 ************************************ 00:15:31.219 START TEST nvmf_delete_subsystem 00:15:31.219 ************************************ 00:15:31.219 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:31.219 * Looking for test storage... 00:15:31.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:31.219 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:31.219 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:15:31.219 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.219 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.219 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.219 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.219 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.219 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.219 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.219 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.219 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.219 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.219 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:31.219 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:31.219 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.219 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.219 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:31.219 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:31.219 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:31.219 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.219 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.219 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.220 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.220 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.220 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.220 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:15:31.220 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.220 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:15:31.220 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:31.220 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:31.220 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:15:31.220 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.220 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.220 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:31.220 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:31.220 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:31.220 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:15:31.220 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:31.220 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.220 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:31.220 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:31.220 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:31.220 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.220 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.220 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.220 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:31.220 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:31.220 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:15:31.220 05:04:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:33.126 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:33.126 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 
00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:33.126 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:33.126 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:33.126 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:33.127 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:33.127 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:33.127 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:33.127 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:33.127 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:33.127 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:33.127 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:33.127 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:33.127 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:33.127 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:33.127 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:33.127 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:33.127 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:33.127 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:33.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:33.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:15:33.127 00:15:33.127 --- 10.0.0.2 ping statistics --- 00:15:33.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.127 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:15:33.127 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:33.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:33.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:15:33.385 00:15:33.385 --- 10.0.0.1 ping statistics --- 00:15:33.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.385 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:15:33.385 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:33.385 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:15:33.385 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:33.385 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:33.385 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:33.385 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:33.385 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:33.386 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:33.386 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:33.386 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:33.386 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:33.386 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:33.386 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:33.386 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=655509 00:15:33.386 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:33.386 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 655509 00:15:33.386 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 655509 ']' 00:15:33.386 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.386 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:33.386 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.386 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:33.386 05:04:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:33.386 [2024-07-13 05:04:39.745680] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:33.386 [2024-07-13 05:04:39.745820] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.386 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.644 [2024-07-13 05:04:39.887591] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:33.903 [2024-07-13 05:04:40.147657] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:33.903 [2024-07-13 05:04:40.147722] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.903 [2024-07-13 05:04:40.147755] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.903 [2024-07-13 05:04:40.147776] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.903 [2024-07-13 05:04:40.147797] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:33.903 [2024-07-13 05:04:40.147905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.903 [2024-07-13 05:04:40.147912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.162 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:34.162 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:15:34.162 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:34.162 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:34.162 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:34.422 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.422 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:34.422 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.422 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:34.422 [2024-07-13 05:04:40.686011] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.422 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.422 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:34.422 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.422 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:34.422 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.422 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:34.422 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.422 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:34.422 [2024-07-13 05:04:40.703540] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.422 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.422 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:34.422 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.422 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:34.422 NULL1 00:15:34.422 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:15:34.422 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:34.422 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.422 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:34.422 Delay0 00:15:34.422 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.422 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:34.422 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.422 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:34.422 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.422 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=655665 00:15:34.422 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:34.423 05:04:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:34.423 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.423 [2024-07-13 05:04:40.827965] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
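For reference, the setup just traced can be replayed by hand against a running nvmf_tgt: create the TCP transport, a subsystem capped at 10 namespaces, a 10.0.0.2:4420 listener, and a null bdev wrapped in a delay bdev so submitted I/O stays in flight, then start background perf traffic and delete the subsystem mid-run. A minimal sketch using SPDK's scripts/rpc.py; every command and flag below is lifted from the trace above, and rpc.py is assumed to target the default /var/tmp/spdk.sock that waitforlisten polled earlier:

  # target side: transport, subsystem, listener (parameters as traced above)
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # a null bdev behind a delay bdev, so queued I/O stays outstanding
  rpc.py bdev_null_create NULL1 1000 512
  rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # initiator side: queue 128 mixed I/Os for 5s, then delete the subsystem mid-run
  spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  sleep 2
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The delay bdev's large fixed latencies keep the perf queue (-q 128) full of outstanding commands when the delete lands, which is what produces the storm of 'completed with error (sct=0, sc=8)' completions that follows.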
00:15:36.327 05:04:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:15:36.327 05:04:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:36.327 05:04:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:15:36.897 Read completed with error (sct=0, sc=8)
00:15:36.897 Read completed with error (sct=0, sc=8)
00:15:36.897 Read completed with error (sct=0, sc=8)
00:15:36.897 starting I/O failed: -6
[... many more groups of four Read/Write "completed with error (sct=0, sc=8)" lines, each closed by "starting I/O failed: -6", trimmed ...]
00:15:36.897 [2024-07-13 05:04:43.133999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(5) to be set
[... further repeated Read/Write "completed with error (sct=0, sc=8)" lines, with intermittent "starting I/O failed: -6" markers, trimmed ...]
00:15:36.898 [2024-07-13 05:04:43.136225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(5) to be set
[... further repeated Read/Write completed-with-error lines trimmed ...]
00:15:36.898 [2024-07-13 05:04:43.136893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(5) to be set
00:15:37.836 [2024-07-13 05:04:44.091345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015980 is same with the state(5) to be set
[... further repeated Read/Write completed-with-error lines trimmed ...]
00:15:37.836 [2024-07-13 05:04:44.136008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(5) to be set
[... further repeated Read/Write completed-with-error lines trimmed ...]
00:15:37.836 [2024-07-13 05:04:44.137601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015e80 is same with the state(5) to be set
[... further repeated Read/Write completed-with-error lines trimmed ...]
00:15:37.836 [2024-07-13 05:04:44.138081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016100 is same with the state(5) to be set
[... further repeated Read/Write completed-with-error lines trimmed ...]
00:15:37.837 [2024-07-13 05:04:44.138583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016600 is same with the state(5) to be set
00:15:37.837 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:37.837 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:15:37.837 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 655665
00:15:37.837 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:15:37.837 Initializing NVMe Controllers
00:15:37.837 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:15:37.837 Controller IO queue size 128, less than required.
00:15:37.837 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:37.837 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:15:37.837 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:15:37.837 Initialization complete. Launching workers.
00:15:37.837 ========================================================
00:15:37.837                                                                            Latency(us)
00:15:37.837 Device Information                                                       :    IOPS   MiB/s    Average        min         max
00:15:37.837 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  194.86    0.10  945958.31    2450.44  1016919.22
00:15:37.837 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  153.81    0.08  883173.94     828.61  1019080.12
00:15:37.837 ========================================================
00:15:37.837 Total                                                                    :  348.68    0.17  918261.94     828.61  1019080.12
00:15:37.837
00:15:37.837 [2024-07-13 05:04:44.143398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015980 (9): Bad file descriptor
00:15:37.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 655665
00:15:38.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (655665) - No such process
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 655665
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 655665
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 655665
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:15:38.409 [2024-07-13 05:04:44.661669] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
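The NOT wait 655665 sequence in the trace is autotest_common.sh asserting that a command fails: the perf process died when its subsystem was deleted out from under it, so wait has to return non-zero. The helper boils down to roughly this (a condensed sketch, not the exact implementation):

    NOT() {
        # invert the exit status: succeed only when the wrapped command fails
        if "$@"; then
            return 1
        fi
        return 0
    }

    NOT wait "$perf_pid"   # passes because wait on the dead perf job reports failure

With the failure confirmed, the test re-creates the same subsystem, listener and namespace and repeats the run.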
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=656073
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 656073
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:15:38.409 05:04:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:15:38.409 EAL: No free 2048 kB hugepages reported on node 1
00:15:38.409 [2024-07-13 05:04:44.782069] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:15:38.978 05:04:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:15:38.978 05:04:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 656073
00:15:38.978 05:04:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
[... the same (( delay++ > 20 )) / kill -0 656073 / sleep 0.5 triple repeats at 00:15:39.238, 00:15:39.808, 00:15:40.465, 00:15:40.724 and 00:15:41.293 ...]
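Each (( delay++ > 20 )) / kill -0 656073 / sleep 0.5 triple above is one turn of a liveness poll: this time nothing is deleted, so the test simply waits for the three-second perf run (-t 3) to exit on its own, and fails if that takes longer than about ten seconds. Roughly:

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do  # process still alive?
        (( delay++ > 20 )) && exit 1           # 20 polls x 0.5 s budget exhausted
        sleep 0.5
    done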
00:15:41.551 Initializing NVMe Controllers
00:15:41.551 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:15:41.551 Controller IO queue size 128, less than required.
00:15:41.551 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:41.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:15:41.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:15:41.551 Initialization complete. Launching workers.
00:15:41.551 ========================================================
00:15:41.551                                                                            Latency(us)
00:15:41.551 Device Information                                                       :    IOPS   MiB/s     Average         min          max
00:15:41.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  128.00    0.06  1005602.19  1000255.53   1016886.77
00:15:41.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  128.00    0.06  1005354.12  1000270.52   1041821.95
00:15:41.551 ========================================================
00:15:41.551 Total                                                                    :  256.00    0.12  1005478.15  1000255.53   1041821.95
00:15:41.551
00:15:41.811 05:04:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:15:41.811 05:04:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 656073
00:15:41.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (656073) - No such process
00:15:41.811 05:04:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 656073
00:15:41.811 05:04:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:15:41.811 05:04:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:15:41.811 05:04:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:15:41.811 05:04:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:15:41.811 05:04:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:15:41.811 05:04:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:15:41.811 05:04:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:15:41.811 05:04:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:15:41.811 rmmod nvme_tcp
00:15:41.811 rmmod nvme_fabrics
00:15:41.811 rmmod nvme_keyring
00:15:41.811 05:04:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:15:41.811 05:04:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:15:41.811 05:04:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:15:41.811 05:04:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 655509 ']'
00:15:41.811 05:04:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 655509
00:15:41.811 05:04:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 655509 ']'
00:15:41.811 05:04:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 655509
00:15:41.811 05:04:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname
00:15:41.811 05:04:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:15:41.811 05:04:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 655509
00:15:41.811 05:04:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:15:41.811 05:04:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:15:41.811 05:04:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 655509'
killing process with pid 655509
00:15:41.811 05:04:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 655509
00:15:41.811 05:04:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 655509
00:15:43.193 05:04:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:15:43.193 05:04:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:15:43.193 05:04:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:15:43.193 05:04:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:15:43.193 05:04:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns
00:15:43.193 05:04:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:43.193 05:04:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:43.193 05:04:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:45.733 05:04:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:15:45.733
00:15:45.733 real 0m14.010s
00:15:45.733 user 0m30.890s
00:15:45.733 sys 0m3.181s
00:15:45.733 05:04:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable
00:15:45.733 05:04:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:15:45.733 ************************************
00:15:45.733 END TEST nvmf_delete_subsystem
00:15:45.733 ************************************
00:15:45.733 05:04:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:15:45.733 05:04:51 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:15:45.733 05:04:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:15:45.733 05:04:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:15:45.733 05:04:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:15:45.733 ************************************
00:15:45.733 START TEST nvmf_ns_masking
00:15:45.733 ************************************
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:15:45.733 * Looking for test storage...
00:15:45.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... same /opt toolchain triplet repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
[... paths/export.sh@3 and @4 prepend the same /opt toolchain directories again; identical PATH values trimmed ...]
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo [... same PATH value as above, trimmed ...]
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=ef6968d7-71cb-4725-9e87-0dee2182d8b5
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=d8314464-edba-4ab8-9382-c343a830abee
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=ce2d36a3-2c74-445e-b8cc-8bd0d7f47d0e
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable
00:15:45.733 05:04:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:15:47.639 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:15:47.639 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=()
00:15:47.639 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs
00:15:47.639 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=()
00:15:47.639 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:15:47.639 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=()
00:15:47.639 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers
00:15:47.639 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=()
00:15:47.639 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs
00:15:47.639 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=()
00:15:47.639 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=()
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=()
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:15:47.640 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:15:47.640 Found 0000:0a:00.1 (0x8086 - 0x159b)
[... identical ice-driver checks repeated for 0000:0a:00.1, trimmed ...]
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]]
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:15:47.640 Found net devices under 0000:0a:00.0: cvl_0_0
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
[... same net-device discovery steps repeated for 0000:0a:00.1, trimmed ...]
00:15:47.640 Found net devices under 0000:0a:00.1: cvl_0_1
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:15:47.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:47.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms
00:15:47.640
00:15:47.640 --- 10.0.0.2 ping statistics ---
00:15:47.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:47.640 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms
00:15:47.640 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:15:47.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:47.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms
00:15:47.640
00:15:47.640 --- 10.0.0.1 ping statistics ---
00:15:47.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:47.641 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms
00:15:47.641 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:47.641 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0
00:15:47.641 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:15:47.641 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:47.641 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:15:47.641 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:15:47.641 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:47.641 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:15:47.641 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp
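nvmftestinit's netns dance above turns the dual-port e810 card into a self-contained target/initiator pair on one host: port cvl_0_0 moves into namespace cvl_0_0_ns_spdk as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Stripped of the xtrace prefixes, the sequence is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator

Because the two ports sit in different namespaces, traffic between them leaves the host on one port and re-enters on the other, which is how a single machine exercises the physical NIC path (the ports are assumed to be cabled back-to-back or through a switch).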
00:15:47.641 05:04:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:15:47.641 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:15:47.641 05:04:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable
00:15:47.641 05:04:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:15:47.641 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=658544
00:15:47.641 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:15:47.641 05:04:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 658544
00:15:47.641 05:04:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 658544 ']'
00:15:47.641 05:04:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:47.641 05:04:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100
00:15:47.641 05:04:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:47.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:47.641 05:04:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable
00:15:47.641 05:04:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:15:47.641 [2024-07-13 05:04:53.942562] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:15:47.641 [2024-07-13 05:04:53.942702] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:47.641 EAL: No free 2048 kB hugepages reported on node 1
00:15:47.641 [2024-07-13 05:04:54.084722] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:47.900 [2024-07-13 05:04:54.328839] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:47.900 [2024-07-13 05:04:54.328944] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:15:47.900 [2024-07-13 05:04:54.328968] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:15:47.900 [2024-07-13 05:04:54.328990] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:15:47.900 [2024-07-13 05:04:54.329008] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:15:47.900 [2024-07-13 05:04:54.329051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:15:48.466 05:04:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:48.466 05:04:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0
00:15:48.466 05:04:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:15:48.466 05:04:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable
00:15:48.466 05:04:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:15:48.466 05:04:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:48.466 05:04:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:15:48.724 [2024-07-13 05:04:55.199463] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:48.724 05:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64
00:15:48.724 05:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512
00:15:48.724 05:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:15:49.295 Malloc1
00:15:49.295 05:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:15:49.552 Malloc2
00:15:49.552 05:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
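The masking test's target setup above reduces to a handful of commands: start nvmf_tgt inside the target namespace, create the TCP transport, and back a subsystem with two 64 MiB malloc bdevs. As plain commands (paths shortened; rpc.py talks to the default /var/tmp/spdk.sock):

    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF &
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME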
00:15:49.813 05:04:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:50.070 05:04:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:50.329 [2024-07-13 05:04:56.708596] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:50.329 05:04:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:50.329 05:04:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ce2d36a3-2c74-445e-b8cc-8bd0d7f47d0e -a 10.0.0.2 -s 4420 -i 4 00:15:50.589 05:04:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:50.589 05:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:50.589 05:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:50.589 05:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:50.589 05:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:52.493 05:04:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:52.493 05:04:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:52.494 05:04:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:52.494 05:04:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:52.494 05:04:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:52.494 05:04:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:52.494 05:04:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:52.494 05:04:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:52.494 05:04:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:52.494 05:04:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:52.494 05:04:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:52.494 05:04:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:52.494 05:04:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:52.751 [ 0]:0x1 00:15:52.751 05:04:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:52.751 05:04:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:52.751 05:04:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c092f03329584911b1708418c9150aff 00:15:52.751 05:04:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c092f03329584911b1708418c9150aff != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:52.751 05:04:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
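The "[ 0]:0x1" probes that follow come from the ns_is_visible helper in target/ns_masking.sh; a loose reconstruction from the traced commands (assumes $ctrl_id was detected via nvme list-subsys as above, e.g. nvme0):

    ns_is_visible() {                               # $1 = nsid to probe, e.g. 0x1
        nvme list-ns "/dev/$ctrl_id" | grep "$1"    # prints "[ n]:<nsid>" if the namespace is listed
        local nguid
        nguid=$(nvme id-ns "/dev/$ctrl_id" -n "$1" -o json | jq -r .nguid)
        # a visible namespace reports its real NGUID; a masked one reads back all zeros
        [[ $nguid != 00000000000000000000000000000000 ]]
    }
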
00:15:53.011 05:04:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:53.011 05:04:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:53.011 05:04:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:53.011 [ 0]:0x1 00:15:53.011 05:04:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:53.011 05:04:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:53.011 05:04:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c092f03329584911b1708418c9150aff 00:15:53.011 05:04:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c092f03329584911b1708418c9150aff != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:53.011 05:04:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:53.011 05:04:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:53.011 05:04:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:53.011 [ 1]:0x2 00:15:53.011 05:04:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:53.011 05:04:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:53.011 05:04:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=824290229dea41eba8d4cfeb7b72b39b 00:15:53.011 05:04:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 824290229dea41eba8d4cfeb7b72b39b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:53.011 05:04:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:53.011 05:04:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:53.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.270 05:04:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:53.529 05:04:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:53.788 05:05:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:53.788 05:05:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ce2d36a3-2c74-445e-b8cc-8bd0d7f47d0e -a 10.0.0.2 -s 4420 -i 4 00:15:54.047 05:05:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:54.047 05:05:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:54.047 05:05:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:54.047 05:05:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:15:54.047 05:05:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:15:54.047 05:05:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:55.951 05:05:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:55.951 05:05:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:55.951 05:05:02 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:55.951 05:05:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:55.951 05:05:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:55.951 05:05:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:55.951 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:55.951 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:56.271 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:56.271 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:56.271 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:56.271 05:05:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:56.271 05:05:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:56.271 05:05:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:56.271 05:05:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:56.271 05:05:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:56.271 05:05:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:56.271 05:05:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:56.271 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:56.271 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:56.271 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:56.271 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:56.271 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:56.271 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:56.271 05:05:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:56.271 05:05:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:56.271 05:05:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:56.271 05:05:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:56.271 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:15:56.271 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:56.271 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:56.271 [ 0]:0x2 00:15:56.271 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:56.271 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:56.271 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=824290229dea41eba8d4cfeb7b72b39b 00:15:56.271 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
824290229dea41eba8d4cfeb7b72b39b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:56.271 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:56.559 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:56.559 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:56.559 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:56.559 [ 0]:0x1 00:15:56.559 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:56.559 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:56.559 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c092f03329584911b1708418c9150aff 00:15:56.559 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c092f03329584911b1708418c9150aff != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:56.559 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:56.559 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:56.559 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:56.559 [ 1]:0x2 00:15:56.559 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:56.559 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:56.559 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=824290229dea41eba8d4cfeb7b72b39b 00:15:56.559 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 824290229dea41eba8d4cfeb7b72b39b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:56.559 05:05:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:56.818 05:05:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:56.818 05:05:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:56.818 05:05:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:56.818 05:05:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:56.818 05:05:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:56.818 05:05:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:56.818 05:05:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:56.818 05:05:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:56.818 05:05:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:56.818 05:05:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:56.818 05:05:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:56.818 05:05:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:56.818 05:05:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:15:56.818 05:05:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:56.818 05:05:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:56.818 05:05:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:56.818 05:05:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:56.818 05:05:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:56.818 05:05:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:56.818 05:05:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:56.818 05:05:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:56.818 [ 0]:0x2 00:15:56.818 05:05:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:56.818 05:05:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:56.818 05:05:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=824290229dea41eba8d4cfeb7b72b39b 00:15:56.818 05:05:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 824290229dea41eba8d4cfeb7b72b39b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:56.818 05:05:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:56.818 05:05:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:57.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.077 05:05:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:57.335 05:05:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:57.335 05:05:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ce2d36a3-2c74-445e-b8cc-8bd0d7f47d0e -a 10.0.0.2 -s 4420 -i 4 00:15:57.335 05:05:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:57.335 05:05:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:57.335 05:05:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:57.335 05:05:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:57.335 05:05:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:57.335 05:05:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:59.876 05:05:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:59.876 05:05:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:59.876 05:05:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:59.876 05:05:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:59.877 05:05:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:59.877 05:05:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
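Taken together, steps @80 through @100 exercise the per-host masking model; the RPC calls involved reduce to this sketch (nsid 1 is the Malloc1 namespace):

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible  # hidden from all hosts by default
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1       # unmask nsid 1 for host1 only
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1       # mask it again

A reconnecting host sees the namespace appear and disappear accordingly, which is exactly what the alternating ns_is_visible / NOT ns_is_visible probes assert.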
00:15:59.877 05:05:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:59.877 05:05:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:59.877 05:05:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:59.877 05:05:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:59.877 05:05:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:59.877 05:05:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:59.877 05:05:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:59.877 [ 0]:0x1 00:15:59.877 05:05:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:59.877 05:05:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:59.877 05:05:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c092f03329584911b1708418c9150aff 00:15:59.877 05:05:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c092f03329584911b1708418c9150aff != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:59.877 05:05:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:59.877 05:05:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:59.877 05:05:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:59.877 [ 1]:0x2 00:15:59.877 05:05:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:59.877 05:05:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:59.877 05:05:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=824290229dea41eba8d4cfeb7b72b39b 00:15:59.877 05:05:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 824290229dea41eba8d4cfeb7b72b39b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:59.877 05:05:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:59.877 [ 0]:0x2 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=824290229dea41eba8d4cfeb7b72b39b 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 824290229dea41eba8d4cfeb7b72b39b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:59.877 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:00.137 [2024-07-13 05:05:06.627337] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:00.137 request: 00:16:00.137 { 00:16:00.137 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:00.137 "nsid": 2, 00:16:00.137 "host": "nqn.2016-06.io.spdk:host1", 00:16:00.137 "method": "nvmf_ns_remove_host", 00:16:00.137 "req_id": 1 00:16:00.137 } 00:16:00.137 Got JSON-RPC error response 00:16:00.137 response: 00:16:00.137 { 00:16:00.137 "code": -32602, 00:16:00.137 "message": "Invalid parameters" 00:16:00.137 } 00:16:00.396 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:00.396 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:00.396 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:00.396 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:00.396 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:00.396 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:00.396 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:00.396 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:00.396 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.396 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:00.396 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.396 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:00.396 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:00.396 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:00.396 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:00.396 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:00.396 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:00.396 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:00.396 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:00.396 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:00.397 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:00.397 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:00.397 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:00.397 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:00.397 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:00.397 [ 0]:0x2 00:16:00.397 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:00.397 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:00.397 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=824290229dea41eba8d4cfeb7b72b39b 00:16:00.397 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
824290229dea41eba8d4cfeb7b72b39b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:00.397 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:00.397 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:00.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.657 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=660296 00:16:00.657 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:00.657 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:00.657 05:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 660296 /var/tmp/host.sock 00:16:00.657 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 660296 ']' 00:16:00.657 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:16:00.657 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:00.657 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:00.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:00.657 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:00.657 05:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:00.657 [2024-07-13 05:05:07.015045] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
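The -32602 "Invalid parameters" response at step @111 was the deliberate negative case: namespace 2 was added without --no-auto-visible, so it is visible to every host and, judging by the nvmf_rpc_ns_visible_paused trace, the per-host masking RPC is rejected for it:

    # Expected to fail: nsid 2 is auto-visible, so per-host add/remove does not apply
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
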
00:16:00.657 [2024-07-13 05:05:07.015188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid660296 ] 00:16:00.657 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.657 [2024-07-13 05:05:07.142970] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.918 [2024-07-13 05:05:07.375646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.853 05:05:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:01.853 05:05:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:16:01.853 05:05:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:02.111 05:05:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:02.370 05:05:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid ef6968d7-71cb-4725-9e87-0dee2182d8b5 00:16:02.370 05:05:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:02.370 05:05:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g EF6968D771CB47259E870DEE2182D8B5 -i 00:16:02.628 05:05:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid d8314464-edba-4ab8-9382-c343a830abee 00:16:02.628 05:05:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:02.628 05:05:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g D8314464EDBA4AB89382C343A830ABEE -i 00:16:02.885 05:05:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:03.142 05:05:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:03.400 05:05:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:03.400 05:05:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:03.967 nvme0n1 00:16:03.967 05:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:03.967 05:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:16:04.224 nvme1n2 00:16:04.224 05:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:04.224 05:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:04.224 05:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:04.224 05:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:04.224 05:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:04.484 05:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:04.484 05:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:04.484 05:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:04.484 05:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:04.741 05:05:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ ef6968d7-71cb-4725-9e87-0dee2182d8b5 == \e\f\6\9\6\8\d\7\-\7\1\c\b\-\4\7\2\5\-\9\e\8\7\-\0\d\e\e\2\1\8\2\d\8\b\5 ]] 00:16:04.741 05:05:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:04.741 05:05:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:04.741 05:05:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:05.000 05:05:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ d8314464-edba-4ab8-9382-c343a830abee == \d\8\3\1\4\4\6\4\-\e\d\b\a\-\4\a\b\8\-\9\3\8\2\-\c\3\4\3\a\8\3\0\a\b\e\e ]] 00:16:05.000 05:05:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 660296 00:16:05.000 05:05:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 660296 ']' 00:16:05.000 05:05:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 660296 00:16:05.000 05:05:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:16:05.000 05:05:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:05.000 05:05:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 660296 00:16:05.000 05:05:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:05.000 05:05:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:05.000 05:05:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 660296' 00:16:05.000 killing process with pid 660296 00:16:05.000 05:05:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 660296 00:16:05.000 05:05:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 660296 00:16:07.531 05:05:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:07.531 05:05:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:16:07.531 05:05:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:16:07.531 05:05:13 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:07.531 05:05:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:16:07.531 05:05:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:07.531 05:05:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:16:07.531 05:05:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:07.531 05:05:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:07.531 rmmod nvme_tcp 00:16:07.531 rmmod nvme_fabrics 00:16:07.531 rmmod nvme_keyring 00:16:07.531 05:05:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:07.531 05:05:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:16:07.531 05:05:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:16:07.531 05:05:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 658544 ']' 00:16:07.531 05:05:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 658544 00:16:07.531 05:05:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 658544 ']' 00:16:07.531 05:05:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 658544 00:16:07.792 05:05:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:16:07.792 05:05:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:07.792 05:05:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 658544 00:16:07.792 05:05:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:07.792 05:05:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:07.792 05:05:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 658544' 00:16:07.792 killing process with pid 658544 00:16:07.792 05:05:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 658544 00:16:07.792 05:05:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 658544 00:16:09.703 05:05:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:09.703 05:05:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:09.703 05:05:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:09.703 05:05:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:09.703 05:05:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:09.703 05:05:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.703 05:05:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:09.703 05:05:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.611 05:05:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:11.611 00:16:11.611 real 0m26.086s 00:16:11.611 user 0m35.324s 00:16:11.611 sys 0m4.450s 00:16:11.611 05:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:11.611 05:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:11.611 ************************************ 00:16:11.611 END TEST nvmf_ns_masking 00:16:11.611 ************************************ 00:16:11.611 05:05:17 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:16:11.611 05:05:17 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:16:11.611 05:05:17 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:11.611 05:05:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:11.611 05:05:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:11.611 05:05:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:11.611 ************************************ 00:16:11.611 START TEST nvmf_nvme_cli 00:16:11.611 ************************************ 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:11.611 * Looking for test storage... 00:16:11.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:16:11.611 05:05:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:13.609 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:13.609 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:13.609 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:13.609 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:13.609 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:13.610 05:05:19 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:13.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:13.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:16:13.610 00:16:13.610 --- 10.0.0.2 ping statistics --- 00:16:13.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.610 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:13.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:13.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:16:13.610 00:16:13.610 --- 10.0.0.1 ping statistics --- 00:16:13.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.610 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=663191 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 663191 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 663191 ']' 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:13.610 05:05:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:13.610 [2024-07-13 05:05:20.072507] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
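The nvmf_tcp_init sequence traced above is what gives the test its two endpoints: the first E810 port (cvl_0_0) is moved into a private network namespace to act as the target, the second (cvl_0_1) stays in the root namespace as the initiator, and the two ports are assumed to be cabled back-to-back on this rig so that 10.0.0.1 <-> 10.0.0.2 traffic really leaves the host. Condensed to the bare commands as they appear in the trace:
# target port into its own netns; initiator port stays in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
ping -c 1 10.0.0.2                                 # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator
From here on every target-side process is wrapped in ip netns exec cvl_0_0_ns_spdk, which is why the nvmf_tgt launch line above carries that prefix.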
00:16:13.610 [2024-07-13 05:05:20.072650] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.870 EAL: No free 2048 kB hugepages reported on node 1 00:16:13.870 [2024-07-13 05:05:20.220506] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:14.130 [2024-07-13 05:05:20.484188] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.130 [2024-07-13 05:05:20.484271] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:14.130 [2024-07-13 05:05:20.484300] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:14.130 [2024-07-13 05:05:20.484322] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:14.130 [2024-07-13 05:05:20.484344] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:14.130 [2024-07-13 05:05:20.484474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.130 [2024-07-13 05:05:20.484777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:14.130 [2024-07-13 05:05:20.484829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.130 [2024-07-13 05:05:20.484840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:14.697 05:05:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:14.697 05:05:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:16:14.697 05:05:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:14.697 05:05:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:14.697 05:05:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:14.697 05:05:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.697 05:05:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:14.697 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.697 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:14.697 [2024-07-13 05:05:21.026969] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.697 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.697 05:05:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:14.697 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.697 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:14.697 Malloc0 00:16:14.697 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.697 05:05:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:14.697 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.697 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:14.697 Malloc1 00:16:14.697 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.697 05:05:21 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:14.697 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.697 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:14.956 [2024-07-13 05:05:21.217760] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:16:14.956 00:16:14.956 Discovery Log Number of Records 2, Generation counter 2 00:16:14.956 =====Discovery Log Entry 0====== 00:16:14.956 trtype: tcp 00:16:14.956 adrfam: ipv4 00:16:14.956 subtype: current discovery subsystem 00:16:14.956 treq: not required 00:16:14.956 portid: 0 00:16:14.956 trsvcid: 4420 00:16:14.956 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:14.956 traddr: 10.0.0.2 00:16:14.956 eflags: explicit discovery connections, duplicate discovery information 00:16:14.956 sectype: none 00:16:14.956 =====Discovery Log Entry 1====== 00:16:14.956 trtype: tcp 00:16:14.956 adrfam: ipv4 00:16:14.956 subtype: nvme subsystem 00:16:14.956 treq: not required 00:16:14.956 portid: 0 00:16:14.956 trsvcid: 4420 00:16:14.956 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:14.956 traddr: 10.0.0.2 00:16:14.956 eflags: none 00:16:14.956 sectype: none 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:14.956 05:05:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:15.523 05:05:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:15.523 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:16:15.523 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:15.523 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:15.523 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:15.523 05:05:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:16:18.058 05:05:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:18.058 05:05:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:18.058 05:05:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:18.058 05:05:24 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:18.058 /dev/nvme0n1 ]] 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:18.058 05:05:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:18.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:18.315 rmmod nvme_tcp 00:16:18.315 rmmod nvme_fabrics 00:16:18.315 rmmod nvme_keyring 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 663191 ']' 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 663191 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 663191 ']' 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 663191 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 663191 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 663191' 00:16:18.315 killing process with pid 663191 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 663191 00:16:18.315 05:05:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 663191 00:16:20.221 05:05:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:20.221 05:05:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:20.221 05:05:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:20.221 05:05:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:20.221 05:05:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:20.221 05:05:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.221 05:05:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:20.221 05:05:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.125 05:05:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:22.125 00:16:22.125 real 0m10.601s 00:16:22.125 user 0m22.362s 00:16:22.125 sys 0m2.417s 00:16:22.125 05:05:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:22.125 05:05:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:22.125 ************************************ 00:16:22.125 END TEST nvmf_nvme_cli 00:16:22.125 ************************************ 00:16:22.125 05:05:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:22.125 05:05:28 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:16:22.125 05:05:28 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:22.125 05:05:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:22.125 05:05:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:22.125 05:05:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:22.125 ************************************ 00:16:22.125 START TEST nvmf_host_management 00:16:22.125 ************************************ 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:22.125 * Looking for test storage... 00:16:22.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:22.125 
05:05:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:22.125 05:05:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:24.660 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:24.660 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:24.660 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ 
up == up ]] 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:24.661 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:24.661 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:24.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:24.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:16:24.661 00:16:24.661 --- 10.0.0.2 ping statistics --- 00:16:24.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.661 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:24.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:24.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:16:24.661 00:16:24.661 --- 10.0.0.1 ping statistics --- 00:16:24.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.661 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=665948 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 665948 00:16:24.661 
05:05:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 665948 ']' 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:24.661 05:05:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:24.661 [2024-07-13 05:05:30.825614] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:24.661 [2024-07-13 05:05:30.825758] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.661 EAL: No free 2048 kB hugepages reported on node 1 00:16:24.661 [2024-07-13 05:05:30.970929] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:24.920 [2024-07-13 05:05:31.238135] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:24.920 [2024-07-13 05:05:31.238212] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.920 [2024-07-13 05:05:31.238245] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:24.920 [2024-07-13 05:05:31.238266] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:24.920 [2024-07-13 05:05:31.238289] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
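The waitforlisten call above is the barrier between nvmfappstart and the rest of the test: it blocks until the freshly launched nvmf_tgt answers on its RPC socket. A simplified sketch of what common/autotest_common.sh does here -- not the exact in-tree code, but matching the defaults visible in the trace (rpc_addr=/var/tmp/spdk.sock, max_retries=100, and the closing (( i == 0 )) check):
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    for ((i = max_retries; i != 0; i--)); do
        kill -0 "$pid" 2>/dev/null || return 1   # app died during startup
        # an answered RPC, not the pid, is the real "ready" signal
        scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null && break
        sleep 0.5
    done
    (( i == 0 )) && return 1                     # retries exhausted
    return 0
}
The (( i == 0 )) / return 0 pair logged next is this wait completing successfully.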
00:16:24.920 [2024-07-13 05:05:31.238412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:24.920 [2024-07-13 05:05:31.238473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:24.920 [2024-07-13 05:05:31.238523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.920 [2024-07-13 05:05:31.238534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:25.486 [2024-07-13 05:05:31.763213] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:25.486 Malloc0 00:16:25.486 [2024-07-13 05:05:31.880602] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=666124 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 666124 /var/tmp/bdevperf.sock 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 666124 ']' 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:25.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:25.486 { 00:16:25.486 "params": { 00:16:25.486 "name": "Nvme$subsystem", 00:16:25.486 "trtype": "$TEST_TRANSPORT", 00:16:25.486 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:25.486 "adrfam": "ipv4", 00:16:25.486 "trsvcid": "$NVMF_PORT", 00:16:25.486 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:25.486 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:25.486 "hdgst": ${hdgst:-false}, 00:16:25.486 "ddgst": ${ddgst:-false} 00:16:25.486 }, 00:16:25.486 "method": "bdev_nvme_attach_controller" 00:16:25.486 } 00:16:25.486 EOF 00:16:25.486 )") 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:25.486 05:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:25.486 "params": { 00:16:25.486 "name": "Nvme0", 00:16:25.486 "trtype": "tcp", 00:16:25.486 "traddr": "10.0.0.2", 00:16:25.486 "adrfam": "ipv4", 00:16:25.486 "trsvcid": "4420", 00:16:25.486 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:25.486 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:25.486 "hdgst": false, 00:16:25.486 "ddgst": false 00:16:25.486 }, 00:16:25.486 "method": "bdev_nvme_attach_controller" 00:16:25.486 }' 00:16:25.744 [2024-07-13 05:05:31.997261] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:25.744 [2024-07-13 05:05:31.997409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid666124 ] 00:16:25.744 EAL: No free 2048 kB hugepages reported on node 1 00:16:25.744 [2024-07-13 05:05:32.123577] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.002 [2024-07-13 05:05:32.361989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.567 Running I/O for 10 seconds... 
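'Running I/O for 10 seconds...' marks the point where the JSON fragment printed above takes effect: bdevperf reads it from /dev/fd/63, attaches Nvme0 over NVMe/TCP, and drives the verify workload while the test polls read counters over bdevperf's private RPC socket. Run by hand, the same leg looks roughly like this (config in a real file rather than a process-substitution fd, and assuming the usual top-level subsystems wrapper around the fragment the trace prints):
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0",
                  "hdgst": false, "ddgst": false }
    } ]
  } ]
}
EOF
build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json \
    -q 64 -o 65536 -w verify -t 10 &
# the same probe the test issues below to confirm I/O is flowing
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
    | jq -r '.bdevs[0].num_read_ops'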
00:16:26.567 05:05:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:26.567 05:05:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:26.567 05:05:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:26.567 05:05:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.567 05:05:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:26.567 05:05:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.567 05:05:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:26.567 05:05:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:26.567 05:05:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:26.567 05:05:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:26.567 05:05:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:26.567 05:05:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:26.567 05:05:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:26.567 05:05:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:26.567 05:05:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:26.567 05:05:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:26.567 05:05:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.567 05:05:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:26.567 05:05:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.567 05:05:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=3 00:16:26.567 05:05:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 3 -ge 100 ']' 00:16:26.567 05:05:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:26.825 05:05:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:26.825 05:05:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:26.825 05:05:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:26.825 05:05:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:26.825 05:05:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.825 05:05:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:26.825 05:05:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.825 05:05:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=387 00:16:26.825 05:05:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:16:26.825 05:05:33 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:16:26.825 05:05:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:26.825 05:05:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:26.825 05:05:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:26.825 05:05:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.825 05:05:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:26.825
[2024-07-13 05:05:33.273721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-13 05:05:33.273825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for every outstanding I/O on qid:1 — WRITE cid:25-63 (lba 60544-65408) and READ cid:0-23 (lba 57344-60288) — each aborted with SQ DELETION (00/08) ...]
[2024-07-13 05:05:33.277276] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2c80 was disconnected and freed. reset controller.
00:16:26.826 [2024-07-13 05:05:33.277401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.826 [2024-07-13 05:05:33.277432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-07-13 05:05:33.277457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.826 [2024-07-13 05:05:33.277479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-07-13 05:05:33.277501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.826 [2024-07-13 05:05:33.277522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-07-13 05:05:33.277544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.826 [2024-07-13 05:05:33.277564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.826 [2024-07-13 05:05:33.277584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:16:26.826 05:05:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.826 05:05:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:26.826 05:05:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.826 05:05:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:26.826 [2024-07-13 05:05:33.278814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:26.826 task offset: 60416 on job bdev=Nvme0n1 fails 00:16:26.826 00:16:26.827 Latency(us) 00:16:26.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.827 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:26.827 Job: Nvme0n1 ended in about 0.37 seconds with error 00:16:26.827 Verification LBA range: start 0x0 length 0x400 00:16:26.827 Nvme0n1 : 0.37 1203.40 75.21 171.91 0.00 45037.25 4563.25 41360.50 00:16:26.827 =================================================================================================================== 00:16:26.827 Total : 1203.40 75.21 171.91 0.00 45037.25 4563.25 41360.50 00:16:26.827 [2024-07-13 05:05:33.284061] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:26.827 [2024-07-13 05:05:33.284110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:16:26.827 05:05:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.827 05:05:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:27.084 [2024-07-13 05:05:33.427171] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
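The trace above is the core of the host-management check: with bdevperf I/O in flight, the test revokes the host's access, the target tears down the queue pair (every queued WRITE/READ completes as ABORTED - SQ DELETION), and once the host is re-admitted the initiator's controller reset succeeds. A minimal sketch of the two RPCs involved, assuming a running target and the stock rpc.py client (paths and NQNs copied from this run):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Revoke host access: outstanding I/O on the initiator is aborted (SQ DELETION)
  $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # Re-admit the host so bdev_nvme's reset/reconnect path can complete
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0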
00:16:28.018 05:05:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 666124 00:16:28.019 05:05:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:28.019 05:05:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:28.019 05:05:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:28.019 05:05:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:28.019 05:05:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:28.019 05:05:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:28.019 05:05:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:28.019 { 00:16:28.019 "params": { 00:16:28.019 "name": "Nvme$subsystem", 00:16:28.019 "trtype": "$TEST_TRANSPORT", 00:16:28.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:28.019 "adrfam": "ipv4", 00:16:28.019 "trsvcid": "$NVMF_PORT", 00:16:28.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:28.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:28.019 "hdgst": ${hdgst:-false}, 00:16:28.019 "ddgst": ${ddgst:-false} 00:16:28.019 }, 00:16:28.019 "method": "bdev_nvme_attach_controller" 00:16:28.019 } 00:16:28.019 EOF 00:16:28.019 )") 00:16:28.019 05:05:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:28.019 05:05:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:28.019 05:05:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:28.019 05:05:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:28.019 "params": { 00:16:28.019 "name": "Nvme0", 00:16:28.019 "trtype": "tcp", 00:16:28.019 "traddr": "10.0.0.2", 00:16:28.019 "adrfam": "ipv4", 00:16:28.019 "trsvcid": "4420", 00:16:28.019 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:28.019 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:28.019 "hdgst": false, 00:16:28.019 "ddgst": false 00:16:28.019 }, 00:16:28.019 "method": "bdev_nvme_attach_controller" 00:16:28.019 }' 00:16:28.019 [2024-07-13 05:05:34.368377] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:28.019 [2024-07-13 05:05:34.368518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid666405 ] 00:16:28.019 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.019 [2024-07-13 05:05:34.497895] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.277 [2024-07-13 05:05:34.738207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.844 Running I/O for 1 seconds... 
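gen_nvmf_target_json above emits the bdev_nvme_attach_controller stanza that bdevperf reads via --json /dev/fd/62. A sketch of an equivalent standalone invocation, assuming the fragment is wrapped in the usual SPDK bdev-subsystem config envelope (flags and values copied from this run; the envelope layout is an assumption, not shown in the log):

  ./build/examples/bdevperf -q 64 -o 65536 -w verify -t 1 --json <(cat <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [ {
      "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0",
                  "hdgst": false, "ddgst": false },
      "method": "bdev_nvme_attach_controller" } ] } ] }
  EOF
  )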
00:16:30.285 00:16:30.285 Latency(us) 00:16:30.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:30.285 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:30.285 Verification LBA range: start 0x0 length 0x400 00:16:30.285 Nvme0n1 : 1.04 1287.73 80.48 0.00 0.00 48871.26 10340.12 41360.50 00:16:30.285 =================================================================================================================== 00:16:30.285 Total : 1287.73 80.48 0.00 0.00 48871.26 10340.12 41360.50 00:16:31.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 666124 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:16:31.220 05:05:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:31.220 05:05:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:31.220 05:05:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:31.220 05:05:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:31.220 05:05:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:31.220 05:05:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:31.220 05:05:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:31.220 05:05:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:31.220 05:05:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:31.220 05:05:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:31.220 05:05:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:31.220 rmmod nvme_tcp 00:16:31.220 rmmod nvme_fabrics 00:16:31.220 rmmod nvme_keyring 00:16:31.220 05:05:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:31.220 05:05:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:31.220 05:05:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:31.220 05:05:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 665948 ']' 00:16:31.220 05:05:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 665948 00:16:31.220 05:05:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 665948 ']' 00:16:31.220 05:05:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 665948 00:16:31.220 05:05:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:16:31.220 05:05:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:31.220 05:05:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 665948 00:16:31.220 05:05:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:31.220 05:05:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:31.220 05:05:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 665948' 00:16:31.220 killing process with pid 
665948 00:16:31.220 05:05:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 665948 00:16:31.220 05:05:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 665948 00:16:32.593 [2024-07-13 05:05:38.757802] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:32.593 05:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:32.594 05:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:32.594 05:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:32.594 05:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:32.594 05:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:32.594 05:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.594 05:05:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:32.594 05:05:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.493 05:05:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:34.493 05:05:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:34.493 00:16:34.493 real 0m12.442s 00:16:34.493 user 0m34.525s 00:16:34.493 sys 0m3.119s 00:16:34.493 05:05:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:34.493 05:05:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:34.493 ************************************ 00:16:34.493 END TEST nvmf_host_management 00:16:34.493 ************************************ 00:16:34.493 05:05:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:34.493 05:05:40 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:34.493 05:05:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:34.493 05:05:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:34.493 05:05:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:34.493 ************************************ 00:16:34.493 START TEST nvmf_lvol 00:16:34.493 ************************************ 00:16:34.493 05:05:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:34.751 * Looking for test storage... 
00:16:34.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.751 05:05:41 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:34.751 05:05:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:34.752 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:34.752 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:34.752 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:34.752 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:34.752 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:34.752 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.752 05:05:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:34.752 05:05:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.752 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:34.752 05:05:41 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:34.752 05:05:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:34.752 05:05:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:36.653 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:36.653 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:36.653 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:36.654 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:36.654 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:36.654 
05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:36.654 05:05:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:36.654 05:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:36.654 05:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:36.654 05:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:36.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:36.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:16:36.654 00:16:36.654 --- 10.0.0.2 ping statistics --- 00:16:36.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.654 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:16:36.654 05:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:36.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:36.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:16:36.654 00:16:36.654 --- 10.0.0.1 ping statistics --- 00:16:36.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.654 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:16:36.654 05:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.654 05:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:36.654 05:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:36.654 05:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.654 05:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:36.654 05:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:36.654 05:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.654 05:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:36.654 05:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:36.654 05:05:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:36.654 05:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:36.654 05:05:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:36.654 05:05:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:36.654 05:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=668867 00:16:36.654 05:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:36.654 05:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 668867 00:16:36.654 05:05:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 668867 ']' 00:16:36.654 05:05:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.654 05:05:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:36.654 05:05:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.654 05:05:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:36.654 05:05:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:36.912 [2024-07-13 05:05:43.172175] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:36.912 [2024-07-13 05:05:43.172314] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:36.912 EAL: No free 2048 kB hugepages reported on node 1 00:16:36.912 [2024-07-13 05:05:43.312863] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:37.170 [2024-07-13 05:05:43.570003] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:37.170 [2024-07-13 05:05:43.570069] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
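The nvmf_tcp_init trace above pins one port of the E810 pair in a private network namespace so target and initiator talk over real NICs on a single machine, then verifies reachability both ways before the target app starts. A condensed sketch of that plumbing, using the interface names and addresses from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  ping -c 1 10.0.0.2                                      # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator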
00:16:37.170 [2024-07-13 05:05:43.570126] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:37.170 [2024-07-13 05:05:43.570160] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:37.170 [2024-07-13 05:05:43.570179] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:37.170 [2024-07-13 05:05:43.570347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.170 [2024-07-13 05:05:43.570390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.170 [2024-07-13 05:05:43.570400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:37.735 05:05:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:37.735 05:05:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:16:37.735 05:05:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:37.735 05:05:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:37.735 05:05:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:37.735 05:05:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:37.735 05:05:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:37.993 [2024-07-13 05:05:44.420118] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:37.993 05:05:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:38.558 05:05:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:38.558 05:05:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:38.815 05:05:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:38.815 05:05:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:39.073 05:05:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:39.332 05:05:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2a321de8-e57c-4988-b8ea-ae8a04c6903f 00:16:39.332 05:05:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2a321de8-e57c-4988-b8ea-ae8a04c6903f lvol 20 00:16:39.589 05:05:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=8ac5eee0-617c-48c6-8760-5f95d05ef3df 00:16:39.590 05:05:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:39.848 05:05:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8ac5eee0-617c-48c6-8760-5f95d05ef3df 00:16:40.105 05:05:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
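The RPC sequence traced above builds the volume stack under test: two 64 MiB malloc bdevs striped into raid0, an lvstore on top of the RAID, a 20 MiB lvol, and an NVMe-oF subsystem exporting it over TCP. A condensed sketch, assuming each create RPC prints the name or UUID of what it made (as rpc.py does in this run; the $base0/$base1 capture is illustrative):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  base0=$($rpc bdev_malloc_create 64 512)                 # -> Malloc0
  base1=$($rpc bdev_malloc_create 64 512)                 # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$base0 $base1"
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)          # -> lvstore UUID
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)         # -> lvol bdev UUID
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420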
00:16:40.105 [2024-07-13 05:05:46.599655] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.363 05:05:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:40.363 05:05:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=669300 00:16:40.363 05:05:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:40.363 05:05:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:40.621 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.556 05:05:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 8ac5eee0-617c-48c6-8760-5f95d05ef3df MY_SNAPSHOT 00:16:41.814 05:05:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1482f1af-11fc-4717-baf2-c77ebf8f2b32 00:16:41.814 05:05:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 8ac5eee0-617c-48c6-8760-5f95d05ef3df 30 00:16:42.072 05:05:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1482f1af-11fc-4717-baf2-c77ebf8f2b32 MY_CLONE 00:16:42.331 05:05:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=80e74a0c-0cfd-4626-84c0-e4e77afa7502 00:16:42.331 05:05:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 80e74a0c-0cfd-4626-84c0-e4e77afa7502 00:16:43.264 05:05:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 669300 00:16:51.493 Initializing NVMe Controllers 00:16:51.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:51.493 Controller IO queue size 128, less than required. 00:16:51.493 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:51.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:51.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:51.493 Initialization complete. Launching workers. 
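Between launching the perf initiator and collecting its results, the test exercises the lvol grow path traced above: snapshot the live volume, resize it from 20 to 30 MiB, clone the snapshot, then inflate the clone so it owns its own clusters. A sketch reusing the $rpc and $lvol shell variables from the previous sketch (illustrative names; the UUIDs differ per run):

  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)     # point-in-time copy
  $rpc bdev_lvol_resize "$lvol" 30                        # grow the live lvol
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)          # thin clone of the snapshot
  $rpc bdev_lvol_inflate "$clone"                         # detach clone from snapshot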
00:16:51.493 ======================================================== 00:16:51.493 Latency(us) 00:16:51.493 Device Information : IOPS MiB/s Average min max 00:16:51.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8281.10 32.35 15459.87 487.41 172552.03 00:16:51.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8145.60 31.82 15728.91 3394.91 185071.30 00:16:51.493 ======================================================== 00:16:51.493 Total : 16426.70 64.17 15593.28 487.41 185071.30 00:16:51.493 00:16:51.493 05:05:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:51.493 05:05:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8ac5eee0-617c-48c6-8760-5f95d05ef3df 00:16:51.751 05:05:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2a321de8-e57c-4988-b8ea-ae8a04c6903f 00:16:52.009 05:05:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:52.009 05:05:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:52.009 05:05:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:52.009 05:05:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:52.009 05:05:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:52.009 05:05:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:52.009 05:05:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:52.009 05:05:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:52.009 05:05:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:52.009 rmmod nvme_tcp 00:16:52.009 rmmod nvme_fabrics 00:16:52.009 rmmod nvme_keyring 00:16:52.009 05:05:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:52.009 05:05:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:52.009 05:05:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:52.009 05:05:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 668867 ']' 00:16:52.009 05:05:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 668867 00:16:52.009 05:05:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 668867 ']' 00:16:52.009 05:05:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 668867 00:16:52.009 05:05:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:16:52.009 05:05:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:52.009 05:05:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 668867 00:16:52.009 05:05:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:52.009 05:05:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:52.009 05:05:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 668867' 00:16:52.009 killing process with pid 668867 00:16:52.009 05:05:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 668867 00:16:52.009 05:05:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 668867 00:16:53.910 05:05:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:53.910 05:05:59 
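Teardown then unwinds the stack in reverse and lets nvmftestfini unload the kernel initiator modules (the rmmod lines above) and stop the target; roughly:

  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc bdev_lvol_delete $lvol
  $rpc bdev_lvol_delete_lvstore -u $lvs
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill $nvmfpid                                          # 668867 here; killprocess wraps this with checks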
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:53.910 05:05:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:53.910 05:05:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:53.910 05:05:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:53.910 05:05:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.910 05:05:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.910 05:05:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.812 05:06:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:55.812 00:16:55.812 real 0m21.045s 00:16:55.812 user 1m10.225s 00:16:55.812 sys 0m5.483s 00:16:55.812 05:06:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:55.812 05:06:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:55.812 ************************************ 00:16:55.812 END TEST nvmf_lvol 00:16:55.812 ************************************ 00:16:55.812 05:06:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:55.812 05:06:02 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:55.812 05:06:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:55.812 05:06:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:55.812 05:06:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:55.812 ************************************ 00:16:55.812 START TEST nvmf_lvs_grow 00:16:55.812 ************************************ 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:55.812 * Looking for test storage... 
00:16:55.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:55.812 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:55.813 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:55.813 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:55.813 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:55.813 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:55.813 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:55.813 05:06:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:55.813 05:06:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:55.813 05:06:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:55.813 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:55.813 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:55.813 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:55.813 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:55.813 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:55.813 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.813 05:06:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:55.813 05:06:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.813 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:55.813 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:55.813 05:06:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:16:55.813 05:06:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:57.712 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:57.712 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:57.712 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
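The device discovery above is plain sysfs walking. Condensed from the common.sh trace, the loop that turns each whitelisted PCI function (0x8086:0x159b, an Intel E810, in this run) into kernel interface names is:

  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdevs registered under this function
      pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the ifname
      net_devs+=("${pci_net_devs[@]}")                   # e.g. cvl_0_0 under 0000:0a:00.0
  done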
0 )) 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:57.712 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:57.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:57.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:16:57.712 00:16:57.712 --- 10.0.0.2 ping statistics --- 00:16:57.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.712 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:57.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:57.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:16:57.712 00:16:57.712 --- 10.0.0.1 ping statistics --- 00:16:57.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.712 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:57.712 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=672806 00:16:57.713 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:57.713 05:06:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 672806 00:16:57.713 05:06:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 672806 ']' 00:16:57.713 05:06:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.713 05:06:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.713 05:06:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.713 05:06:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.713 05:06:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:57.971 [2024-07-13 05:06:04.279910] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:57.971 [2024-07-13 05:06:04.280068] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.971 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.971 [2024-07-13 05:06:04.430368] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.229 [2024-07-13 05:06:04.677237] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:58.229 [2024-07-13 05:06:04.677314] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
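The target/initiator split above is built from a single dual-port NIC: one port moves into a network namespace and becomes the target side (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1). The commands, as traced:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns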
00:16:58.229 [2024-07-13 05:06:04.677342] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:58.229 [2024-07-13 05:06:04.677363] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:58.229 [2024-07-13 05:06:04.677383] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:58.229 [2024-07-13 05:06:04.677427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.796 05:06:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.796 05:06:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:16:58.796 05:06:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:58.796 05:06:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:58.796 05:06:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:58.796 05:06:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.796 05:06:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:59.053 [2024-07-13 05:06:05.527125] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:59.053 05:06:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:59.053 05:06:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:59.053 05:06:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:59.053 05:06:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:59.311 ************************************ 00:16:59.311 START TEST lvs_grow_clean 00:16:59.311 ************************************ 00:16:59.311 05:06:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:16:59.311 05:06:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:59.311 05:06:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:59.311 05:06:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:59.311 05:06:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:59.311 05:06:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:59.311 05:06:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:59.311 05:06:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:59.311 05:06:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:59.311 05:06:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:59.570 05:06:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:16:59.570 05:06:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:59.827 05:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0244345c-e3f1-4b82-bde6-c5d33b7efe3d 00:16:59.827 05:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0244345c-e3f1-4b82-bde6-c5d33b7efe3d 00:16:59.827 05:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:00.085 05:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:00.085 05:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:00.085 05:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0244345c-e3f1-4b82-bde6-c5d33b7efe3d lvol 150 00:17:00.343 05:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a4e28fb4-b2da-4196-b130-57fae4a04aa8 00:17:00.343 05:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:00.343 05:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:00.602 [2024-07-13 05:06:06.920976] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:00.602 [2024-07-13 05:06:06.921103] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:00.602 true 00:17:00.602 05:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0244345c-e3f1-4b82-bde6-c5d33b7efe3d 00:17:00.602 05:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:00.860 05:06:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:00.860 05:06:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:01.118 05:06:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a4e28fb4-b2da-4196-b130-57fae4a04aa8 00:17:01.376 05:06:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:01.635 [2024-07-13 05:06:07.932240] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:01.635 05:06:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
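For lvs_grow the backing device is a file-backed AIO bdev, so the store can later be grown just by enlarging the file and rescanning. Condensed from the trace above, with $aio as shorthand for the aio_bdev file path:

  aio=$spdk/test/nvmf/target/aio_bdev
  rm -f $aio && truncate -s 200M $aio
  $rpc bdev_aio_create $aio aio_bdev 4096                # 4096-byte blocks
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  data_clusters=$($rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters')
  (( data_clusters == 49 ))                              # 200M at 4M clusters, minus metadata
  lvol=$($rpc bdev_lvol_create -u $lvs lvol 150)         # 150M lvol
  truncate -s 400M $aio                                  # enlarge the backing file...
  $rpc bdev_aio_rescan aio_bdev                          # ...51200 -> 102400 blocks; still 49 clusters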
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:01.893 05:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=673710 00:17:01.893 05:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:01.893 05:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:01.893 05:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 673710 /var/tmp/bdevperf.sock 00:17:01.893 05:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 673710 ']' 00:17:01.893 05:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:01.893 05:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:01.893 05:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:01.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:01.893 05:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:01.893 05:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:01.893 [2024-07-13 05:06:08.296346] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:01.893 [2024-07-13 05:06:08.296506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid673710 ] 00:17:01.893 EAL: No free 2048 kB hugepages reported on node 1 00:17:02.152 [2024-07-13 05:06:08.423257] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.411 [2024-07-13 05:06:08.676381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.978 05:06:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:02.978 05:06:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:17:02.978 05:06:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:03.236 Nvme0n1 00:17:03.236 05:06:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:03.494 [ 00:17:03.495 { 00:17:03.495 "name": "Nvme0n1", 00:17:03.495 "aliases": [ 00:17:03.495 "a4e28fb4-b2da-4196-b130-57fae4a04aa8" 00:17:03.495 ], 00:17:03.495 "product_name": "NVMe disk", 00:17:03.495 "block_size": 4096, 00:17:03.495 "num_blocks": 38912, 00:17:03.495 "uuid": "a4e28fb4-b2da-4196-b130-57fae4a04aa8", 00:17:03.495 "assigned_rate_limits": { 00:17:03.495 "rw_ios_per_sec": 0, 00:17:03.495 "rw_mbytes_per_sec": 0, 00:17:03.495 "r_mbytes_per_sec": 0, 00:17:03.495 "w_mbytes_per_sec": 0 00:17:03.495 }, 00:17:03.495 "claimed": false, 00:17:03.495 "zoned": false, 00:17:03.495 "supported_io_types": { 00:17:03.495 "read": true, 00:17:03.495 "write": true, 00:17:03.495 "unmap": true, 00:17:03.495 "flush": true, 00:17:03.495 "reset": true, 00:17:03.495 "nvme_admin": true, 00:17:03.495 "nvme_io": true, 00:17:03.495 "nvme_io_md": false, 00:17:03.495 "write_zeroes": true, 00:17:03.495 "zcopy": false, 00:17:03.495 "get_zone_info": false, 00:17:03.495 "zone_management": false, 00:17:03.495 "zone_append": false, 00:17:03.495 "compare": true, 00:17:03.495 "compare_and_write": true, 00:17:03.495 "abort": true, 00:17:03.495 "seek_hole": false, 00:17:03.495 "seek_data": false, 00:17:03.495 "copy": true, 00:17:03.495 "nvme_iov_md": false 00:17:03.495 }, 00:17:03.495 "memory_domains": [ 00:17:03.495 { 00:17:03.495 "dma_device_id": "system", 00:17:03.495 "dma_device_type": 1 00:17:03.495 } 00:17:03.495 ], 00:17:03.495 "driver_specific": { 00:17:03.495 "nvme": [ 00:17:03.495 { 00:17:03.495 "trid": { 00:17:03.495 "trtype": "TCP", 00:17:03.495 "adrfam": "IPv4", 00:17:03.495 "traddr": "10.0.0.2", 00:17:03.495 "trsvcid": "4420", 00:17:03.495 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:03.495 }, 00:17:03.495 "ctrlr_data": { 00:17:03.495 "cntlid": 1, 00:17:03.495 "vendor_id": "0x8086", 00:17:03.495 "model_number": "SPDK bdev Controller", 00:17:03.495 "serial_number": "SPDK0", 00:17:03.495 "firmware_revision": "24.09", 00:17:03.495 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:03.495 "oacs": { 00:17:03.495 "security": 0, 00:17:03.495 "format": 0, 00:17:03.495 "firmware": 0, 00:17:03.495 "ns_manage": 0 00:17:03.495 }, 00:17:03.495 "multi_ctrlr": true, 00:17:03.495 "ana_reporting": false 00:17:03.495 }, 
00:17:03.495 "vs": { 00:17:03.495 "nvme_version": "1.3" 00:17:03.495 }, 00:17:03.495 "ns_data": { 00:17:03.495 "id": 1, 00:17:03.495 "can_share": true 00:17:03.495 } 00:17:03.495 } 00:17:03.495 ], 00:17:03.495 "mp_policy": "active_passive" 00:17:03.495 } 00:17:03.495 } 00:17:03.495 ] 00:17:03.495 05:06:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=674094 00:17:03.495 05:06:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:03.495 05:06:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:03.754 Running I/O for 10 seconds... 00:17:04.700 Latency(us) 00:17:04.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.700 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:04.700 Nvme0n1 : 1.00 10860.00 42.42 0.00 0.00 0.00 0.00 0.00 00:17:04.700 =================================================================================================================== 00:17:04.700 Total : 10860.00 42.42 0.00 0.00 0.00 0.00 0.00 00:17:04.700 00:17:05.691 05:06:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0244345c-e3f1-4b82-bde6-c5d33b7efe3d 00:17:05.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:05.691 Nvme0n1 : 2.00 10987.00 42.92 0.00 0.00 0.00 0.00 0.00 00:17:05.691 =================================================================================================================== 00:17:05.691 Total : 10987.00 42.92 0.00 0.00 0.00 0.00 0.00 00:17:05.691 00:17:05.950 true 00:17:05.950 05:06:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0244345c-e3f1-4b82-bde6-c5d33b7efe3d 00:17:05.950 05:06:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:06.208 05:06:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:06.208 05:06:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:06.208 05:06:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 674094 00:17:06.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:06.775 Nvme0n1 : 3.00 11071.00 43.25 0.00 0.00 0.00 0.00 0.00 00:17:06.775 =================================================================================================================== 00:17:06.775 Total : 11071.00 43.25 0.00 0.00 0.00 0.00 0.00 00:17:06.775 00:17:07.710 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:07.710 Nvme0n1 : 4.00 11129.00 43.47 0.00 0.00 0.00 0.00 0.00 00:17:07.710 =================================================================================================================== 00:17:07.710 Total : 11129.00 43.47 0.00 0.00 0.00 0.00 0.00 00:17:07.710 00:17:08.646 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:08.646 Nvme0n1 : 5.00 11240.00 43.91 0.00 0.00 0.00 0.00 0.00 00:17:08.646 =================================================================================================================== 00:17:08.646 
Total : 11240.00 43.91 0.00 0.00 0.00 0.00 0.00 00:17:08.646 00:17:09.582 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:09.583 Nvme0n1 : 6.00 11229.33 43.86 0.00 0.00 0.00 0.00 0.00 00:17:09.583 =================================================================================================================== 00:17:09.583 Total : 11229.33 43.86 0.00 0.00 0.00 0.00 0.00 00:17:09.583 00:17:10.958 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:10.958 Nvme0n1 : 7.00 11294.29 44.12 0.00 0.00 0.00 0.00 0.00 00:17:10.958 =================================================================================================================== 00:17:10.958 Total : 11294.29 44.12 0.00 0.00 0.00 0.00 0.00 00:17:10.958 00:17:11.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:11.891 Nvme0n1 : 8.00 11319.38 44.22 0.00 0.00 0.00 0.00 0.00 00:17:11.891 =================================================================================================================== 00:17:11.891 Total : 11319.38 44.22 0.00 0.00 0.00 0.00 0.00 00:17:11.891 00:17:12.825 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:12.825 Nvme0n1 : 9.00 11345.78 44.32 0.00 0.00 0.00 0.00 0.00 00:17:12.825 =================================================================================================================== 00:17:12.825 Total : 11345.78 44.32 0.00 0.00 0.00 0.00 0.00 00:17:12.825 00:17:13.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:13.760 Nvme0n1 : 10.00 11373.30 44.43 0.00 0.00 0.00 0.00 0.00 00:17:13.760 =================================================================================================================== 00:17:13.760 Total : 11373.30 44.43 0.00 0.00 0.00 0.00 0.00 00:17:13.760 00:17:13.760 00:17:13.760 Latency(us) 00:17:13.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:13.760 Nvme0n1 : 10.01 11373.45 44.43 0.00 0.00 11246.68 6941.96 22136.60 00:17:13.760 =================================================================================================================== 00:17:13.760 Total : 11373.45 44.43 0.00 0.00 11246.68 6941.96 22136.60 00:17:13.760 0 00:17:13.760 05:06:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 673710 00:17:13.760 05:06:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 673710 ']' 00:17:13.760 05:06:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 673710 00:17:13.760 05:06:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:17:13.760 05:06:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:13.760 05:06:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 673710 00:17:13.760 05:06:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:13.760 05:06:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:13.760 05:06:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 673710' 00:17:13.760 killing process with pid 673710 00:17:13.760 05:06:20 
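The I/O and the grow above come from two processes: bdevperf attaches to the target over the fabric and drives randwrite, and two seconds into the run the harness grows the lvstore underneath it; the cluster-count check is the actual assertion. Sketched from the trace, with the pid capture as shorthand for the harness's bookkeeping:

  $spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  run_test_pid=$!
  $rpc bdev_lvol_grow_lvstore -u $lvs                    # issued while randwrite I/O is in flight
  wait $run_test_pid
  data_clusters=$($rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters')
  (( data_clusters == 99 ))                              # 49 clusters at 200M -> 99 at 400M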
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 673710 00:17:13.760 Received shutdown signal, test time was about 10.000000 seconds 00:17:13.760 00:17:13.760 Latency(us) 00:17:13.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.760 =================================================================================================================== 00:17:13.760 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:13.760 05:06:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 673710 00:17:14.693 05:06:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:14.951 05:06:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:15.516 05:06:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0244345c-e3f1-4b82-bde6-c5d33b7efe3d 00:17:15.516 05:06:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:15.516 05:06:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:15.516 05:06:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:15.516 05:06:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:15.774 [2024-07-13 05:06:22.231908] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:15.775 05:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0244345c-e3f1-4b82-bde6-c5d33b7efe3d 00:17:15.775 05:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:15.775 05:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0244345c-e3f1-4b82-bde6-c5d33b7efe3d 00:17:15.775 05:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:15.775 05:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:15.775 05:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:15.775 05:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:15.775 05:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:15.775 05:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:15.775 05:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:15.775 05:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:15.775 05:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0244345c-e3f1-4b82-bde6-c5d33b7efe3d 00:17:16.340 request: 00:17:16.340 { 00:17:16.340 "uuid": "0244345c-e3f1-4b82-bde6-c5d33b7efe3d", 00:17:16.340 "method": "bdev_lvol_get_lvstores", 00:17:16.340 "req_id": 1 00:17:16.340 } 00:17:16.340 Got JSON-RPC error response 00:17:16.340 response: 00:17:16.340 { 00:17:16.340 "code": -19, 00:17:16.340 "message": "No such device" 00:17:16.340 } 00:17:16.340 05:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:16.340 05:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:16.340 05:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:16.340 05:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:16.341 05:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:16.598 aio_bdev 00:17:16.598 05:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a4e28fb4-b2da-4196-b130-57fae4a04aa8 00:17:16.598 05:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=a4e28fb4-b2da-4196-b130-57fae4a04aa8 00:17:16.598 05:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:16.598 05:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:17:16.598 05:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:16.598 05:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:16.598 05:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:16.855 05:06:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a4e28fb4-b2da-4196-b130-57fae4a04aa8 -t 2000 00:17:16.855 [ 00:17:16.855 { 00:17:16.855 "name": "a4e28fb4-b2da-4196-b130-57fae4a04aa8", 00:17:16.855 "aliases": [ 00:17:16.855 "lvs/lvol" 00:17:16.855 ], 00:17:16.855 "product_name": "Logical Volume", 00:17:16.855 "block_size": 4096, 00:17:16.855 "num_blocks": 38912, 00:17:16.855 "uuid": "a4e28fb4-b2da-4196-b130-57fae4a04aa8", 00:17:16.855 "assigned_rate_limits": { 00:17:16.855 "rw_ios_per_sec": 0, 00:17:16.855 "rw_mbytes_per_sec": 0, 00:17:16.855 "r_mbytes_per_sec": 0, 00:17:16.855 "w_mbytes_per_sec": 0 00:17:16.855 }, 00:17:16.855 "claimed": false, 00:17:16.855 "zoned": false, 00:17:16.855 "supported_io_types": { 00:17:16.855 "read": true, 00:17:16.855 "write": true, 00:17:16.855 "unmap": true, 00:17:16.855 "flush": false, 00:17:16.855 "reset": true, 00:17:16.855 "nvme_admin": false, 00:17:16.855 "nvme_io": false, 00:17:16.855 
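The sequence above is the persistence check: the AIO base bdev is hot-removed out from under the open lvstore, the next get_lvstores call is asserted to fail (the -19 / "No such device" response just traced), and re-creating the AIO bdev on the same file lets examine reload the store and its lvol. Roughly, with NOT being the autotest helper that inverts an exit status:

  $rpc bdev_aio_delete aio_bdev                          # lvstore 'lvs' closes on hot-remove
  NOT $rpc bdev_lvol_get_lvstores -u $lvs                # expected: JSON-RPC error -19
  $rpc bdev_aio_create $aio aio_bdev 4096                # same backing file, re-examined
  $rpc bdev_wait_for_examine
  $rpc bdev_get_bdevs -b $lvol -t 2000                   # lvol is back: 38912 blocks, as before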
"nvme_io_md": false, 00:17:16.855 "write_zeroes": true, 00:17:16.855 "zcopy": false, 00:17:16.855 "get_zone_info": false, 00:17:16.855 "zone_management": false, 00:17:16.855 "zone_append": false, 00:17:16.855 "compare": false, 00:17:16.855 "compare_and_write": false, 00:17:16.855 "abort": false, 00:17:16.855 "seek_hole": true, 00:17:16.855 "seek_data": true, 00:17:16.855 "copy": false, 00:17:16.855 "nvme_iov_md": false 00:17:16.855 }, 00:17:16.855 "driver_specific": { 00:17:16.855 "lvol": { 00:17:16.855 "lvol_store_uuid": "0244345c-e3f1-4b82-bde6-c5d33b7efe3d", 00:17:16.855 "base_bdev": "aio_bdev", 00:17:16.855 "thin_provision": false, 00:17:16.855 "num_allocated_clusters": 38, 00:17:16.855 "snapshot": false, 00:17:16.855 "clone": false, 00:17:16.855 "esnap_clone": false 00:17:16.855 } 00:17:16.855 } 00:17:16.855 } 00:17:16.855 ] 00:17:16.855 05:06:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:17:16.855 05:06:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0244345c-e3f1-4b82-bde6-c5d33b7efe3d 00:17:16.855 05:06:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:17.112 05:06:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:17.112 05:06:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0244345c-e3f1-4b82-bde6-c5d33b7efe3d 00:17:17.112 05:06:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:17.370 05:06:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:17.370 05:06:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a4e28fb4-b2da-4196-b130-57fae4a04aa8 00:17:17.628 05:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0244345c-e3f1-4b82-bde6-c5d33b7efe3d 00:17:17.885 05:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:18.143 05:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:18.143 00:17:18.143 real 0m19.047s 00:17:18.143 user 0m18.628s 00:17:18.143 sys 0m1.934s 00:17:18.143 05:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:18.143 05:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:18.143 ************************************ 00:17:18.143 END TEST lvs_grow_clean 00:17:18.143 ************************************ 00:17:18.401 05:06:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:18.401 05:06:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:18.401 05:06:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:18.401 05:06:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:17:18.401 05:06:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:18.401 ************************************ 00:17:18.401 START TEST lvs_grow_dirty 00:17:18.401 ************************************ 00:17:18.401 05:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:17:18.401 05:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:18.401 05:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:18.401 05:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:18.401 05:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:18.401 05:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:18.401 05:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:18.401 05:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:18.401 05:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:18.401 05:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:18.659 05:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:18.659 05:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:18.917 05:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=9f504691-8739-4f56-a36a-17a605b2c23c 00:17:18.917 05:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f504691-8739-4f56-a36a-17a605b2c23c 00:17:18.917 05:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:19.175 05:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:19.175 05:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:19.175 05:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9f504691-8739-4f56-a36a-17a605b2c23c lvol 150 00:17:19.454 05:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=caac0aad-ddbc-4452-bc98-cf75ae6e93b6 00:17:19.454 05:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:19.455 05:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:19.717 
[2024-07-13 05:06:25.955593] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:19.717 [2024-07-13 05:06:25.955718] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:19.717 true 00:17:19.717 05:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f504691-8739-4f56-a36a-17a605b2c23c 00:17:19.717 05:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:19.974 05:06:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:19.974 05:06:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:20.232 05:06:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 caac0aad-ddbc-4452-bc98-cf75ae6e93b6 00:17:20.489 05:06:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:20.747 [2024-07-13 05:06:27.014904] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:20.747 05:06:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:21.005 05:06:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=676176 00:17:21.005 05:06:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:21.005 05:06:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:21.005 05:06:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 676176 /var/tmp/bdevperf.sock 00:17:21.005 05:06:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 676176 ']' 00:17:21.005 05:06:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:21.005 05:06:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:21.005 05:06:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:21.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
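Condensing the dirty-variant setup traced above into standalone RPCs (a sketch; paths are relative to the spdk checkout, rpc.py talks to the default /var/tmp/spdk.sock, and the names and sizes are the ones in the trace):

    aio_file=test/nvmf/target/aio_bdev
    truncate -s 200M "$aio_file"
    scripts/rpc.py bdev_aio_create "$aio_file" aio_bdev 4096
    lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$(scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)
    truncate -s 400M "$aio_file"              # grow the file under the live lvstore...
    scripts/rpc.py bdev_aio_rescan aio_bdev   # ...and have the aio bdev pick up the new block count
    # note the lvstore still reports 49 data clusters here; it only grows when
    # bdev_lvol_grow_lvstore is called later, while bdevperf I/O is in flight
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420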
00:17:21.005 05:06:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:21.005 05:06:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:21.005 [2024-07-13 05:06:27.343738] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:21.005 [2024-07-13 05:06:27.343899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid676176 ] 00:17:21.005 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.005 [2024-07-13 05:06:27.472849] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.263 [2024-07-13 05:06:27.726393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.829 05:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:21.829 05:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:21.829 05:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:22.394 Nvme0n1 00:17:22.394 05:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:22.651 [ 00:17:22.651 { 00:17:22.651 "name": "Nvme0n1", 00:17:22.651 "aliases": [ 00:17:22.651 "caac0aad-ddbc-4452-bc98-cf75ae6e93b6" 00:17:22.651 ], 00:17:22.651 "product_name": "NVMe disk", 00:17:22.651 "block_size": 4096, 00:17:22.651 "num_blocks": 38912, 00:17:22.651 "uuid": "caac0aad-ddbc-4452-bc98-cf75ae6e93b6", 00:17:22.651 "assigned_rate_limits": { 00:17:22.651 "rw_ios_per_sec": 0, 00:17:22.651 "rw_mbytes_per_sec": 0, 00:17:22.651 "r_mbytes_per_sec": 0, 00:17:22.651 "w_mbytes_per_sec": 0 00:17:22.651 }, 00:17:22.651 "claimed": false, 00:17:22.651 "zoned": false, 00:17:22.651 "supported_io_types": { 00:17:22.651 "read": true, 00:17:22.651 "write": true, 00:17:22.651 "unmap": true, 00:17:22.651 "flush": true, 00:17:22.651 "reset": true, 00:17:22.651 "nvme_admin": true, 00:17:22.651 "nvme_io": true, 00:17:22.652 "nvme_io_md": false, 00:17:22.652 "write_zeroes": true, 00:17:22.652 "zcopy": false, 00:17:22.652 "get_zone_info": false, 00:17:22.652 "zone_management": false, 00:17:22.652 "zone_append": false, 00:17:22.652 "compare": true, 00:17:22.652 "compare_and_write": true, 00:17:22.652 "abort": true, 00:17:22.652 "seek_hole": false, 00:17:22.652 "seek_data": false, 00:17:22.652 "copy": true, 00:17:22.652 "nvme_iov_md": false 00:17:22.652 }, 00:17:22.652 "memory_domains": [ 00:17:22.652 { 00:17:22.652 "dma_device_id": "system", 00:17:22.652 "dma_device_type": 1 00:17:22.652 } 00:17:22.652 ], 00:17:22.652 "driver_specific": { 00:17:22.652 "nvme": [ 00:17:22.652 { 00:17:22.652 "trid": { 00:17:22.652 "trtype": "TCP", 00:17:22.652 "adrfam": "IPv4", 00:17:22.652 "traddr": "10.0.0.2", 00:17:22.652 "trsvcid": "4420", 00:17:22.652 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:22.652 }, 00:17:22.652 "ctrlr_data": { 00:17:22.652 "cntlid": 1, 00:17:22.652 "vendor_id": "0x8086", 00:17:22.652 "model_number": "SPDK bdev Controller", 00:17:22.652 "serial_number": "SPDK0", 
00:17:22.652 "firmware_revision": "24.09", 00:17:22.652 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:22.652 "oacs": { 00:17:22.652 "security": 0, 00:17:22.652 "format": 0, 00:17:22.652 "firmware": 0, 00:17:22.652 "ns_manage": 0 00:17:22.652 }, 00:17:22.652 "multi_ctrlr": true, 00:17:22.652 "ana_reporting": false 00:17:22.652 }, 00:17:22.652 "vs": { 00:17:22.652 "nvme_version": "1.3" 00:17:22.652 }, 00:17:22.652 "ns_data": { 00:17:22.652 "id": 1, 00:17:22.652 "can_share": true 00:17:22.652 } 00:17:22.652 } 00:17:22.652 ], 00:17:22.652 "mp_policy": "active_passive" 00:17:22.652 } 00:17:22.652 } 00:17:22.652 ] 00:17:22.652 05:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=676327 00:17:22.652 05:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:22.652 05:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:22.652 Running I/O for 10 seconds... 00:17:24.022 Latency(us) 00:17:24.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.022 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.022 Nvme0n1 : 1.00 10860.00 42.42 0.00 0.00 0.00 0.00 0.00 00:17:24.022 =================================================================================================================== 00:17:24.022 Total : 10860.00 42.42 0.00 0.00 0.00 0.00 0.00 00:17:24.022 00:17:24.591 05:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9f504691-8739-4f56-a36a-17a605b2c23c 00:17:24.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.848 Nvme0n1 : 2.00 10859.00 42.42 0.00 0.00 0.00 0.00 0.00 00:17:24.848 =================================================================================================================== 00:17:24.848 Total : 10859.00 42.42 0.00 0.00 0.00 0.00 0.00 00:17:24.848 00:17:24.848 true 00:17:24.848 05:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f504691-8739-4f56-a36a-17a605b2c23c 00:17:24.848 05:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:25.106 05:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:25.106 05:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:25.106 05:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 676327 00:17:25.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:25.671 Nvme0n1 : 3.00 10943.67 42.75 0.00 0.00 0.00 0.00 0.00 00:17:25.671 =================================================================================================================== 00:17:25.672 Total : 10943.67 42.75 0.00 0.00 0.00 0.00 0.00 00:17:25.672 00:17:27.046 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:27.046 Nvme0n1 : 4.00 11017.50 43.04 0.00 0.00 0.00 0.00 0.00 00:17:27.046 =================================================================================================================== 00:17:27.046 Total : 11017.50 43.04 0.00 0.00 
0.00 0.00 0.00 00:17:27.046 00:17:27.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:27.983 Nvme0n1 : 5.00 11023.80 43.06 0.00 0.00 0.00 0.00 0.00 00:17:27.983 =================================================================================================================== 00:17:27.983 Total : 11023.80 43.06 0.00 0.00 0.00 0.00 0.00 00:17:27.983 00:17:28.921 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:28.921 Nvme0n1 : 6.00 11028.00 43.08 0.00 0.00 0.00 0.00 0.00 00:17:28.921 =================================================================================================================== 00:17:28.921 Total : 11028.00 43.08 0.00 0.00 0.00 0.00 0.00 00:17:28.921 00:17:29.857 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:29.857 Nvme0n1 : 7.00 11015.29 43.03 0.00 0.00 0.00 0.00 0.00 00:17:29.857 =================================================================================================================== 00:17:29.857 Total : 11015.29 43.03 0.00 0.00 0.00 0.00 0.00 00:17:29.857 00:17:30.793 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:30.793 Nvme0n1 : 8.00 11027.50 43.08 0.00 0.00 0.00 0.00 0.00 00:17:30.793 =================================================================================================================== 00:17:30.793 Total : 11027.50 43.08 0.00 0.00 0.00 0.00 0.00 00:17:30.793 00:17:31.728 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:31.728 Nvme0n1 : 9.00 11036.89 43.11 0.00 0.00 0.00 0.00 0.00 00:17:31.728 =================================================================================================================== 00:17:31.728 Total : 11036.89 43.11 0.00 0.00 0.00 0.00 0.00 00:17:31.728 00:17:32.664 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:32.664 Nvme0n1 : 10.00 11038.10 43.12 0.00 0.00 0.00 0.00 0.00 00:17:32.664 =================================================================================================================== 00:17:32.664 Total : 11038.10 43.12 0.00 0.00 0.00 0.00 0.00 00:17:32.664 00:17:32.922 00:17:32.922 Latency(us) 00:17:32.922 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.922 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:32.922 Nvme0n1 : 10.01 11038.43 43.12 0.00 0.00 11588.62 5655.51 23107.51 00:17:32.922 =================================================================================================================== 00:17:32.923 Total : 11038.43 43.12 0.00 0.00 11588.62 5655.51 23107.51 00:17:32.923 0 00:17:32.923 05:06:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 676176 00:17:32.923 05:06:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 676176 ']' 00:17:32.923 05:06:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 676176 00:17:32.923 05:06:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:17:32.923 05:06:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:32.923 05:06:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 676176 00:17:32.923 05:06:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:32.923 05:06:39 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:32.923 05:06:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 676176' 00:17:32.923 killing process with pid 676176 00:17:32.923 05:06:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 676176 00:17:32.923 Received shutdown signal, test time was about 10.000000 seconds 00:17:32.923 00:17:32.923 Latency(us) 00:17:32.923 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.923 =================================================================================================================== 00:17:32.923 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:32.923 05:06:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 676176 00:17:33.859 05:06:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:34.143 05:06:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:34.407 05:06:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f504691-8739-4f56-a36a-17a605b2c23c 00:17:34.407 05:06:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:34.666 05:06:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:34.666 05:06:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:34.666 05:06:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 672806 00:17:34.666 05:06:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 672806 00:17:34.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 672806 Killed "${NVMF_APP[@]}" "$@" 00:17:34.666 05:06:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:34.666 05:06:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:34.666 05:06:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:34.666 05:06:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:34.666 05:06:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:34.666 05:06:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=677782 00:17:34.666 05:06:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:34.666 05:06:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 677782 00:17:34.666 05:06:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 677782 ']' 00:17:34.666 05:06:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.666 05:06:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:17:34.666 05:06:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.666 05:06:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:34.666 05:06:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:34.924 [2024-07-13 05:06:41.204779] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:34.924 [2024-07-13 05:06:41.204945] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.924 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.924 [2024-07-13 05:06:41.342648] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.182 [2024-07-13 05:06:41.569524] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.182 [2024-07-13 05:06:41.569584] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.182 [2024-07-13 05:06:41.569624] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.182 [2024-07-13 05:06:41.569645] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.182 [2024-07-13 05:06:41.569664] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
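At this point the first target has been SIGKILLed with the lvstore dirty and a fresh nvmf_tgt is being brought up inside the test namespace. What nvmfappstart/waitforlisten amount to is roughly the following (a sketch, not the real helpers in test/nvmf/common.sh and autotest_common.sh; rpc_get_methods is used here only as a cheap liveness probe):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # poll the default RPC socket until the app answers
    until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done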
00:17:35.182 [2024-07-13 05:06:41.569702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.747 05:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:35.747 05:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:35.747 05:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:35.747 05:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:35.747 05:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:35.747 05:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.747 05:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:36.005 [2024-07-13 05:06:42.452239] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:36.005 [2024-07-13 05:06:42.452473] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:36.005 [2024-07-13 05:06:42.452559] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:36.005 05:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:36.005 05:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev caac0aad-ddbc-4452-bc98-cf75ae6e93b6 00:17:36.005 05:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=caac0aad-ddbc-4452-bc98-cf75ae6e93b6 00:17:36.005 05:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:36.005 05:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:17:36.005 05:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:36.005 05:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:36.005 05:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:36.263 05:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b caac0aad-ddbc-4452-bc98-cf75ae6e93b6 -t 2000 00:17:36.521 [ 00:17:36.521 { 00:17:36.521 "name": "caac0aad-ddbc-4452-bc98-cf75ae6e93b6", 00:17:36.521 "aliases": [ 00:17:36.521 "lvs/lvol" 00:17:36.521 ], 00:17:36.521 "product_name": "Logical Volume", 00:17:36.521 "block_size": 4096, 00:17:36.521 "num_blocks": 38912, 00:17:36.521 "uuid": "caac0aad-ddbc-4452-bc98-cf75ae6e93b6", 00:17:36.521 "assigned_rate_limits": { 00:17:36.521 "rw_ios_per_sec": 0, 00:17:36.521 "rw_mbytes_per_sec": 0, 00:17:36.521 "r_mbytes_per_sec": 0, 00:17:36.521 "w_mbytes_per_sec": 0 00:17:36.521 }, 00:17:36.521 "claimed": false, 00:17:36.521 "zoned": false, 00:17:36.521 "supported_io_types": { 00:17:36.521 "read": true, 00:17:36.521 "write": true, 00:17:36.521 "unmap": true, 00:17:36.521 "flush": false, 00:17:36.521 "reset": true, 00:17:36.521 "nvme_admin": false, 00:17:36.521 "nvme_io": false, 00:17:36.521 "nvme_io_md": 
false, 00:17:36.521 "write_zeroes": true, 00:17:36.521 "zcopy": false, 00:17:36.521 "get_zone_info": false, 00:17:36.521 "zone_management": false, 00:17:36.521 "zone_append": false, 00:17:36.521 "compare": false, 00:17:36.521 "compare_and_write": false, 00:17:36.521 "abort": false, 00:17:36.521 "seek_hole": true, 00:17:36.521 "seek_data": true, 00:17:36.521 "copy": false, 00:17:36.521 "nvme_iov_md": false 00:17:36.521 }, 00:17:36.521 "driver_specific": { 00:17:36.521 "lvol": { 00:17:36.521 "lvol_store_uuid": "9f504691-8739-4f56-a36a-17a605b2c23c", 00:17:36.521 "base_bdev": "aio_bdev", 00:17:36.521 "thin_provision": false, 00:17:36.521 "num_allocated_clusters": 38, 00:17:36.521 "snapshot": false, 00:17:36.521 "clone": false, 00:17:36.521 "esnap_clone": false 00:17:36.521 } 00:17:36.521 } 00:17:36.521 } 00:17:36.521 ] 00:17:36.521 05:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:17:36.521 05:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f504691-8739-4f56-a36a-17a605b2c23c 00:17:36.521 05:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:36.779 05:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:36.779 05:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f504691-8739-4f56-a36a-17a605b2c23c 00:17:36.780 05:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:37.037 05:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:37.037 05:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:37.295 [2024-07-13 05:06:43.704861] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:37.295 05:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f504691-8739-4f56-a36a-17a605b2c23c 00:17:37.295 05:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:37.295 05:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f504691-8739-4f56-a36a-17a605b2c23c 00:17:37.295 05:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:37.295 05:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:37.296 05:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:37.296 05:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:37.296 05:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:17:37.296 05:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:37.296 05:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:37.296 05:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:37.296 05:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f504691-8739-4f56-a36a-17a605b2c23c 00:17:37.553 request: 00:17:37.554 { 00:17:37.554 "uuid": "9f504691-8739-4f56-a36a-17a605b2c23c", 00:17:37.554 "method": "bdev_lvol_get_lvstores", 00:17:37.554 "req_id": 1 00:17:37.554 } 00:17:37.554 Got JSON-RPC error response 00:17:37.554 response: 00:17:37.554 { 00:17:37.554 "code": -19, 00:17:37.554 "message": "No such device" 00:17:37.554 } 00:17:37.554 05:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:37.554 05:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:37.554 05:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:37.554 05:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:37.554 05:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:37.812 aio_bdev 00:17:37.812 05:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev caac0aad-ddbc-4452-bc98-cf75ae6e93b6 00:17:37.812 05:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=caac0aad-ddbc-4452-bc98-cf75ae6e93b6 00:17:37.812 05:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:37.812 05:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:17:37.812 05:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:37.812 05:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:37.812 05:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:38.069 05:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b caac0aad-ddbc-4452-bc98-cf75ae6e93b6 -t 2000 00:17:38.327 [ 00:17:38.327 { 00:17:38.327 "name": "caac0aad-ddbc-4452-bc98-cf75ae6e93b6", 00:17:38.327 "aliases": [ 00:17:38.327 "lvs/lvol" 00:17:38.327 ], 00:17:38.327 "product_name": "Logical Volume", 00:17:38.327 "block_size": 4096, 00:17:38.327 "num_blocks": 38912, 00:17:38.327 "uuid": "caac0aad-ddbc-4452-bc98-cf75ae6e93b6", 00:17:38.327 "assigned_rate_limits": { 00:17:38.327 "rw_ios_per_sec": 0, 00:17:38.327 "rw_mbytes_per_sec": 0, 00:17:38.327 "r_mbytes_per_sec": 0, 00:17:38.327 "w_mbytes_per_sec": 0 00:17:38.327 }, 00:17:38.327 "claimed": false, 00:17:38.327 "zoned": false, 00:17:38.327 "supported_io_types": { 
00:17:38.327 "read": true, 00:17:38.327 "write": true, 00:17:38.327 "unmap": true, 00:17:38.327 "flush": false, 00:17:38.327 "reset": true, 00:17:38.327 "nvme_admin": false, 00:17:38.327 "nvme_io": false, 00:17:38.327 "nvme_io_md": false, 00:17:38.327 "write_zeroes": true, 00:17:38.327 "zcopy": false, 00:17:38.327 "get_zone_info": false, 00:17:38.327 "zone_management": false, 00:17:38.327 "zone_append": false, 00:17:38.327 "compare": false, 00:17:38.327 "compare_and_write": false, 00:17:38.327 "abort": false, 00:17:38.327 "seek_hole": true, 00:17:38.327 "seek_data": true, 00:17:38.327 "copy": false, 00:17:38.327 "nvme_iov_md": false 00:17:38.327 }, 00:17:38.327 "driver_specific": { 00:17:38.327 "lvol": { 00:17:38.327 "lvol_store_uuid": "9f504691-8739-4f56-a36a-17a605b2c23c", 00:17:38.327 "base_bdev": "aio_bdev", 00:17:38.327 "thin_provision": false, 00:17:38.327 "num_allocated_clusters": 38, 00:17:38.327 "snapshot": false, 00:17:38.327 "clone": false, 00:17:38.328 "esnap_clone": false 00:17:38.328 } 00:17:38.328 } 00:17:38.328 } 00:17:38.328 ] 00:17:38.328 05:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:17:38.328 05:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f504691-8739-4f56-a36a-17a605b2c23c 00:17:38.328 05:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:38.586 05:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:38.586 05:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f504691-8739-4f56-a36a-17a605b2c23c 00:17:38.586 05:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:39.152 05:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:39.152 05:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete caac0aad-ddbc-4452-bc98-cf75ae6e93b6 00:17:39.152 05:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9f504691-8739-4f56-a36a-17a605b2c23c 00:17:39.718 05:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:39.718 05:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:39.975 00:17:39.975 real 0m21.557s 00:17:39.975 user 0m54.434s 00:17:39.975 sys 0m4.708s 00:17:39.975 05:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:39.975 05:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:39.975 ************************************ 00:17:39.975 END TEST lvs_grow_dirty 00:17:39.975 ************************************ 00:17:39.975 05:06:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:39.975 05:06:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:17:39.975 05:06:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:17:39.975 05:06:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:17:39.975 05:06:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:39.976 05:06:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:39.976 05:06:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:39.976 05:06:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:39.976 05:06:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:39.976 05:06:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:39.976 nvmf_trace.0 00:17:39.976 05:06:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:17:39.976 05:06:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:39.976 05:06:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:39.976 05:06:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:39.976 05:06:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:39.976 05:06:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:39.976 05:06:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:39.976 05:06:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:39.976 rmmod nvme_tcp 00:17:39.976 rmmod nvme_fabrics 00:17:39.976 rmmod nvme_keyring 00:17:39.976 05:06:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:39.976 05:06:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:39.976 05:06:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:39.976 05:06:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 677782 ']' 00:17:39.976 05:06:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 677782 00:17:39.976 05:06:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 677782 ']' 00:17:39.976 05:06:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 677782 00:17:39.976 05:06:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:17:39.976 05:06:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:39.976 05:06:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 677782 00:17:39.976 05:06:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:39.976 05:06:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:39.976 05:06:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 677782' 00:17:39.976 killing process with pid 677782 00:17:39.976 05:06:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 677782 00:17:39.976 05:06:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 677782 00:17:41.353 05:06:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:41.353 05:06:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:41.353 05:06:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:41.353 05:06:47 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:41.353 05:06:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:41.353 05:06:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.353 05:06:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:41.353 05:06:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.255 05:06:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:43.255 00:17:43.255 real 0m47.684s 00:17:43.255 user 1m20.806s 00:17:43.255 sys 0m8.634s 00:17:43.255 05:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:43.255 05:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:43.255 ************************************ 00:17:43.255 END TEST nvmf_lvs_grow 00:17:43.255 ************************************ 00:17:43.255 05:06:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:43.255 05:06:49 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:43.255 05:06:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:43.255 05:06:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:43.256 05:06:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:43.514 ************************************ 00:17:43.514 START TEST nvmf_bdev_io_wait 00:17:43.514 ************************************ 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:43.514 * Looking for test storage... 
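The bdev_io_wait test starting here opens with the same skeleton every nvmf target test in this run uses; approximately (a sketch of the pattern, not the script verbatim — the helper names are the ones visible in the trace below):

    source "$rootdir/test/nvmf/common.sh"   # ports, hostnqn, NIC/netns helpers
    nvmftestinit                            # detect the e810 NICs and build the cvl_0_* netns plumbing traced below
    nvmfappstart -m 0xF --wait-for-rpc      # start nvmf_tgt on four cores, holding subsystem init for RPC
    trap 'nvmftestfini' SIGINT SIGTERM EXIT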
00:17:43.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:43.514 05:06:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:45.416 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:45.416 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:45.416 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:45.416 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:45.416 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:45.677 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:45.677 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:45.677 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:45.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:45.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:17:45.677 00:17:45.677 --- 10.0.0.2 ping statistics --- 00:17:45.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.677 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:17:45.677 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:45.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:45.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:17:45.677 00:17:45.677 --- 10.0.0.1 ping statistics --- 00:17:45.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.677 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:17:45.677 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:45.677 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:45.677 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:45.677 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:45.677 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:45.677 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:45.677 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:45.677 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:45.677 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:45.677 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:45.677 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:45.677 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:45.677 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:45.677 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=680561 00:17:45.677 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:45.677 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 680561 00:17:45.677 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 680561 ']' 00:17:45.677 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.677 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:45.677 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.677 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:45.677 05:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:45.677 [2024-07-13 05:06:52.079402] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:45.677 [2024-07-13 05:06:52.079558] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.677 EAL: No free 2048 kB hugepages reported on node 1 00:17:45.937 [2024-07-13 05:06:52.217049] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:46.195 [2024-07-13 05:06:52.478520] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.195 [2024-07-13 05:06:52.478598] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.195 [2024-07-13 05:06:52.478626] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.195 [2024-07-13 05:06:52.478647] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.195 [2024-07-13 05:06:52.478668] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:46.195 [2024-07-13 05:06:52.478784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.195 [2024-07-13 05:06:52.478857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:46.195 [2024-07-13 05:06:52.478954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.195 [2024-07-13 05:06:52.478962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:46.763 05:06:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:46.763 05:06:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:17:46.763 05:06:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:46.763 05:06:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:46.763 05:06:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:46.763 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.763 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:46.763 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.763 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:46.763 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.763 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:46.763 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.763 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:46.763 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.763 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:46.763 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.763 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:46.763 [2024-07-13 05:06:53.248645] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:46.763 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
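Everything the harness does to the target from here on is JSON-RPC over /var/tmp/spdk.sock: the trace above releases the --wait-for-rpc pause and creates the TCP transport, and the lines that follow provision a malloc bdev, a subsystem, a namespace, and a listener. A hedged standalone equivalent using SPDK's scripts/rpc.py (rpc_cmd in the trace is a thin wrapper around it; all values copied from the trace, flag interpretations in the comments are my reading of the test's intent):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# tiny bdev_io pool (-p 5) and cache (-c 1): deliberately starves the pool so
# the perf jobs have to hit the io_wait retry path this test exists to cover
$RPC bdev_set_options -p 5 -c 1
$RPC framework_start_init                      # leave the --wait-for-rpc pause
$RPC nvmf_create_transport -t tcp -o -u 8192   # transport flags verbatim from the trace
$RPC bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420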
00:17:46.763 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:46.763 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.763 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:47.023 Malloc0 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:47.023 [2024-07-13 05:06:53.365946] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=680718 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=680720 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=680722 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:47.023 { 00:17:47.023 "params": { 00:17:47.023 "name": "Nvme$subsystem", 00:17:47.023 "trtype": "$TEST_TRANSPORT", 00:17:47.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:47.023 "adrfam": "ipv4", 00:17:47.023 "trsvcid": "$NVMF_PORT", 00:17:47.023 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:17:47.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:47.023 "hdgst": ${hdgst:-false}, 00:17:47.023 "ddgst": ${ddgst:-false} 00:17:47.023 }, 00:17:47.023 "method": "bdev_nvme_attach_controller" 00:17:47.023 } 00:17:47.023 EOF 00:17:47.023 )") 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=680724 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:47.023 { 00:17:47.023 "params": { 00:17:47.023 "name": "Nvme$subsystem", 00:17:47.023 "trtype": "$TEST_TRANSPORT", 00:17:47.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:47.023 "adrfam": "ipv4", 00:17:47.023 "trsvcid": "$NVMF_PORT", 00:17:47.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:47.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:47.023 "hdgst": ${hdgst:-false}, 00:17:47.023 "ddgst": ${ddgst:-false} 00:17:47.023 }, 00:17:47.023 "method": "bdev_nvme_attach_controller" 00:17:47.023 } 00:17:47.023 EOF 00:17:47.023 )") 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:47.023 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:47.023 { 00:17:47.023 "params": { 00:17:47.023 "name": "Nvme$subsystem", 00:17:47.023 "trtype": "$TEST_TRANSPORT", 00:17:47.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:47.024 "adrfam": "ipv4", 00:17:47.024 "trsvcid": "$NVMF_PORT", 00:17:47.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:47.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:47.024 "hdgst": ${hdgst:-false}, 00:17:47.024 "ddgst": ${ddgst:-false} 00:17:47.024 }, 00:17:47.024 "method": "bdev_nvme_attach_controller" 00:17:47.024 } 00:17:47.024 EOF 00:17:47.024 )") 00:17:47.024 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:47.024 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:47.024 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:47.024 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:47.024 { 00:17:47.024 "params": { 00:17:47.024 
"name": "Nvme$subsystem", 00:17:47.024 "trtype": "$TEST_TRANSPORT", 00:17:47.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:47.024 "adrfam": "ipv4", 00:17:47.024 "trsvcid": "$NVMF_PORT", 00:17:47.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:47.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:47.024 "hdgst": ${hdgst:-false}, 00:17:47.024 "ddgst": ${ddgst:-false} 00:17:47.024 }, 00:17:47.024 "method": "bdev_nvme_attach_controller" 00:17:47.024 } 00:17:47.024 EOF 00:17:47.024 )") 00:17:47.024 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:47.024 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:47.024 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 680718 00:17:47.024 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:47.024 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:47.024 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:47.024 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:47.024 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:47.024 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:47.024 "params": { 00:17:47.024 "name": "Nvme1", 00:17:47.024 "trtype": "tcp", 00:17:47.024 "traddr": "10.0.0.2", 00:17:47.024 "adrfam": "ipv4", 00:17:47.024 "trsvcid": "4420", 00:17:47.024 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.024 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:47.024 "hdgst": false, 00:17:47.024 "ddgst": false 00:17:47.024 }, 00:17:47.024 "method": "bdev_nvme_attach_controller" 00:17:47.024 }' 00:17:47.024 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:47.024 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:47.024 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:47.024 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:47.024 "params": { 00:17:47.024 "name": "Nvme1", 00:17:47.024 "trtype": "tcp", 00:17:47.024 "traddr": "10.0.0.2", 00:17:47.024 "adrfam": "ipv4", 00:17:47.024 "trsvcid": "4420", 00:17:47.024 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.024 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:47.024 "hdgst": false, 00:17:47.024 "ddgst": false 00:17:47.024 }, 00:17:47.024 "method": "bdev_nvme_attach_controller" 00:17:47.024 }' 00:17:47.024 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:47.024 "params": { 00:17:47.024 "name": "Nvme1", 00:17:47.024 "trtype": "tcp", 00:17:47.024 "traddr": "10.0.0.2", 00:17:47.024 "adrfam": "ipv4", 00:17:47.024 "trsvcid": "4420", 00:17:47.024 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.024 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:47.024 "hdgst": false, 00:17:47.024 "ddgst": false 00:17:47.024 }, 00:17:47.024 "method": "bdev_nvme_attach_controller" 00:17:47.024 }' 00:17:47.024 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:47.024 05:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:47.024 "params": { 00:17:47.024 "name": "Nvme1", 00:17:47.024 "trtype": "tcp", 00:17:47.024 "traddr": "10.0.0.2", 00:17:47.024 "adrfam": "ipv4", 00:17:47.024 "trsvcid": "4420", 00:17:47.024 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.024 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:47.024 "hdgst": false, 00:17:47.024 "ddgst": false 00:17:47.024 }, 00:17:47.024 "method": 
"bdev_nvme_attach_controller" 00:17:47.024 }' 00:17:47.024 [2024-07-13 05:06:53.450013] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:47.024 [2024-07-13 05:06:53.450016] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:47.024 [2024-07-13 05:06:53.450010] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:47.024 [2024-07-13 05:06:53.450158] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-13 05:06:53.450160] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 [2024-07-13 05:06:53.450161] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:47.024 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:47.024 --proc-type=auto ] 00:17:47.024 [2024-07-13 05:06:53.451786] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:47.024 [2024-07-13 05:06:53.451949] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:47.282 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.282 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.282 [2024-07-13 05:06:53.688063] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.282 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.541 [2024-07-13 05:06:53.793169] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.541 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.541 [2024-07-13 05:06:53.892650] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.541 [2024-07-13 05:06:53.912693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:47.541 [2024-07-13 05:06:53.969160] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.541 [2024-07-13 05:06:54.018789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:47.799 [2024-07-13 05:06:54.116756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:47.799 [2024-07-13 05:06:54.184557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:17:48.057 Running I/O for 1 seconds... 00:17:48.057 Running I/O for 1 seconds... 00:17:48.346 Running I/O for 1 seconds... 00:17:48.346 Running I/O for 1 seconds... 
00:17:49.284
00:17:49.284 Latency(us)
00:17:49.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:49.284 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:17:49.284 Nvme1n1 : 1.01 8744.87 34.16 0.00 0.00 14561.96 3980.71 22233.69
00:17:49.284 ===================================================================================================================
00:17:49.284 Total : 8744.87 34.16 0.00 0.00 14561.96 3980.71 22233.69
00:17:49.284
00:17:49.284 Latency(us)
00:17:49.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:49.284 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:17:49.284 Nvme1n1 : 1.00 150331.27 587.23 0.00 0.00 848.32 347.40 1189.36
00:17:49.284 ===================================================================================================================
00:17:49.284 Total : 150331.27 587.23 0.00 0.00 848.32 347.40 1189.36
00:17:49.284
00:17:49.284 Latency(us)
00:17:49.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:49.284 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:17:49.284 Nvme1n1 : 1.01 5826.85 22.76 0.00 0.00 21800.59 6553.60 27962.03
00:17:49.284 ===================================================================================================================
00:17:49.284 Total : 5826.85 22.76 0.00 0.00 21800.59 6553.60 27962.03
00:17:49.284
00:17:49.284 Latency(us)
00:17:49.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:49.284 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:17:49.284 Nvme1n1 : 1.01 6414.96 25.06 0.00 0.00 19843.33 5873.97 32428.18
00:17:49.284 ===================================================================================================================
00:17:49.284 Total : 6414.96 25.06 0.00 0.00 19843.33 5873.97 32428.18
00:17:50.220 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 680720
00:17:50.220 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 680722
00:17:50.220 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 680724
00:17:50.220 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:50.220 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:50.220 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:17:50.220 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:50.220 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:17:50.220 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:17:50.220 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup
00:17:50.220 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync
00:17:50.481 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:17:50.481 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e
00:17:50.481 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20}
00:17:50.481 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:17:50.481 rmmod nvme_tcp
00:17:50.481 rmmod nvme_fabrics
00:17:50.481 rmmod nvme_keyring
00:17:50.481 05:06:56 nvmf_tcp.nvmf_bdev_io_wait --
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:50.481 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:50.481 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:50.481 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 680561 ']' 00:17:50.481 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 680561 00:17:50.481 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 680561 ']' 00:17:50.481 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 680561 00:17:50.481 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:17:50.481 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:50.481 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 680561 00:17:50.481 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:50.481 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:50.481 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 680561' 00:17:50.481 killing process with pid 680561 00:17:50.481 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 680561 00:17:50.481 05:06:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 680561 00:17:51.862 05:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:51.862 05:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:51.862 05:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:51.862 05:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:51.862 05:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:51.862 05:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.862 05:06:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.862 05:06:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.768 05:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:53.768 00:17:53.768 real 0m10.227s 00:17:53.768 user 0m30.668s 00:17:53.768 sys 0m4.387s 00:17:53.768 05:06:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:53.768 05:07:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:53.768 ************************************ 00:17:53.768 END TEST nvmf_bdev_io_wait 00:17:53.768 ************************************ 00:17:53.768 05:07:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:53.768 05:07:00 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:53.768 05:07:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:53.768 05:07:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:53.768 05:07:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:53.768 ************************************ 00:17:53.768 START TEST nvmf_queue_depth 00:17:53.768 ************************************ 00:17:53.768 
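One sanity check worth doing on the four Latency(us) tables at the end of the bdev_io_wait run above: with a fixed queue depth Q, Little's law gives average latency of roughly Q / IOPS, and every job's reported average sits just below that bound, as expected when the queue is not completely full for the whole second. A quick check with the depth (128) and the IOPS and latency figures copied from the tables:

awk 'BEGIN {
  q = 128                                 # -q 128 from the bdevperf command lines
  printf "write: %.0f us implied vs 14561.96 us reported\n", q / 8744.87   * 1e6
  printf "flush: %.0f us implied vs 848.32 us reported\n",   q / 150331.27 * 1e6
  printf "read:  %.0f us implied vs 21800.59 us reported\n", q / 5826.85   * 1e6
  printf "unmap: %.0f us implied vs 19843.33 us reported\n", q / 6414.96   * 1e6
}'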
05:07:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:53.768 * Looking for test storage... 00:17:53.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:53.768 05:07:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:55.675 
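The arrays built above are an allow-list of Intel (e810/x722) and Mellanox device IDs; the loop in the next lines resolves each matching PCI function to its kernel netdev by globbing sysfs, which is what prints the "Found net devices under 0000:0a:00.x" messages. A minimal sketch of that resolution step, with the BDFs taken from this log:

for pci in 0000:0a:00.0 0000:0a:00.1; do
  # every entry under /sys/bus/pci/devices/<bdf>/net/ is a netdev owned by that function
  for path in /sys/bus/pci/devices/$pci/net/*; do
    [ -e "$path" ] || continue   # the glob stays literal if no driver is bound
    echo "Found net devices under $pci: ${path##*/}"
  done
done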
05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:55.675 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:55.675 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:55.675 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:55.675 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:55.676 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:55.676 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:55.676 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:55.676 05:07:01 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:55.676 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:55.676 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:55.676 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:55.676 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:55.676 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:55.676 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:55.676 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:55.676 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:55.676 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:55.676 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:55.676 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:55.676 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:55.676 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:55.676 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:55.676 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:55.676 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:55.676 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:55.676 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:55.676 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:55.676 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:55.676 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:55.676 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:55.676 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:55.676 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:55.676 05:07:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:55.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:55.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:17:55.676 00:17:55.676 --- 10.0.0.2 ping statistics --- 00:17:55.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.676 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:55.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:55.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:17:55.676 00:17:55.676 --- 10.0.0.1 ping statistics --- 00:17:55.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.676 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=683155 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 683155 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 683155 ']' 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:55.676 05:07:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:55.937 [2024-07-13 05:07:02.186068] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:55.937 [2024-07-13 05:07:02.186208] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:55.937 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.937 [2024-07-13 05:07:02.326710] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.196 [2024-07-13 05:07:02.585130] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.196 [2024-07-13 05:07:02.585204] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.196 [2024-07-13 05:07:02.585233] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.196 [2024-07-13 05:07:02.585257] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.196 [2024-07-13 05:07:02.585279] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:56.196 [2024-07-13 05:07:02.585329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:56.770 [2024-07-13 05:07:03.109804] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:56.770 Malloc0 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.770 
05:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:56.770 [2024-07-13 05:07:03.236421] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=683331 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 683331 /var/tmp/bdevperf.sock 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 683331 ']' 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:56.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:56.770 05:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:57.030 [2024-07-13 05:07:03.317775] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
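Unlike the fixed one-second jobs in the previous test, queue_depth starts bdevperf idle (-z) on its own RPC socket, injects the NVMe-oF bdev over RPC, and only then triggers the timed run; that is why the trace below shows rpc_cmd -s /var/tmp/bdevperf.sock followed by bdevperf.py perform_tests. A condensed sketch of that control flow, with paths and arguments copied from the trace:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bdevperf.sock

# -z: start with no bdevs and wait; the workload target arrives later over RPC
$SPDK/build/examples/bdevperf -z -r $SOCK -q 1024 -o 4096 -w verify -t 10 &
# (the harness waits for the socket to appear before issuing RPCs)
$SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# kick off the 10 s verify run at queue depth 1024 and wait for the results
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests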
00:17:57.030 [2024-07-13 05:07:03.317947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid683331 ]
00:17:57.030 EAL: No free 2048 kB hugepages reported on node 1
00:17:57.030 [2024-07-13 05:07:03.446430] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:57.289 [2024-07-13 05:07:03.699764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:17:57.856 05:07:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:57.856 05:07:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0
00:17:57.856 05:07:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:17:57.856 05:07:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:57.856 05:07:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:17:58.115 NVMe0n1
00:17:58.115 05:07:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:58.115 05:07:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:17:58.374 Running I/O for 10 seconds...
00:18:08.361
00:18:08.362
00:18:08.362 Latency(us)
00:18:08.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:08.362 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:18:08.362 Verification LBA range: start 0x0 length 0x4000
00:18:08.362 NVMe0n1 : 10.11 6175.15 24.12 0.00 0.00 164992.18 25437.68 108741.21
00:18:08.362 ===================================================================================================================
00:18:08.362 Total : 6175.15 24.12 0.00 0.00 164992.18 25437.68 108741.21
00:18:08.362 0
00:18:08.362 05:07:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 683331
00:18:08.362 05:07:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 683331 ']'
00:18:08.362 05:07:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 683331
00:18:08.362 05:07:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname
00:18:08.362 05:07:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:08.362 05:07:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 683331
00:18:08.362 05:07:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:18:08.362 05:07:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:18:08.362 05:07:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 683331'
00:18:08.362 killing process with pid 683331
00:18:08.362 05:07:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 683331
00:18:08.362 Received shutdown signal, test time was about 10.000000 seconds
00:18:08.362
00:18:08.362 Latency(us)
00:18:08.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:08.362 ===================================================================================================================
00:18:08.362 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:08.362 05:07:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 683331 00:18:09.757 05:07:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:09.757 05:07:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:09.757 05:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:09.757 05:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:09.757 05:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:09.757 05:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:09.757 05:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:09.757 05:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:09.757 rmmod nvme_tcp 00:18:09.757 rmmod nvme_fabrics 00:18:09.757 rmmod nvme_keyring 00:18:09.757 05:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:09.757 05:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:09.757 05:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:09.757 05:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 683155 ']' 00:18:09.757 05:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 683155 00:18:09.757 05:07:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 683155 ']' 00:18:09.757 05:07:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 683155 00:18:09.757 05:07:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:09.757 05:07:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:09.757 05:07:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 683155 00:18:09.757 05:07:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:09.757 05:07:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:09.757 05:07:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 683155' 00:18:09.757 killing process with pid 683155 00:18:09.757 05:07:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 683155 00:18:09.757 05:07:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 683155 00:18:11.178 05:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:11.178 05:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:11.178 05:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:11.178 05:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:11.178 05:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:11.178 05:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.178 05:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.178 05:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.087 05:07:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:13.087 00:18:13.087 real 0m19.398s 00:18:13.087 user 0m27.942s 
00:18:13.087 sys 0m3.060s 00:18:13.087 05:07:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:13.087 05:07:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:13.087 ************************************ 00:18:13.087 END TEST nvmf_queue_depth 00:18:13.087 ************************************ 00:18:13.087 05:07:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:13.087 05:07:19 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:13.087 05:07:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:13.087 05:07:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:13.087 05:07:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:13.087 ************************************ 00:18:13.087 START TEST nvmf_target_multipath 00:18:13.087 ************************************ 00:18:13.087 05:07:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:13.087 * Looking for test storage... 00:18:13.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:13.088 05:07:19 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:13.088 05:07:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:15.625 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:15.625 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:15.625 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:15.625 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:18:15.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:15.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms
00:18:15.625
00:18:15.625 --- 10.0.0.2 ping statistics ---
00:18:15.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:15.625 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms
00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:18:15.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:15.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms
00:18:15.625
00:18:15.625 --- 10.0.0.1 ping statistics ---
00:18:15.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:15.625 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms
00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:15.625 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0
00:18:15.626 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:18:15.626 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:15.626 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:18:15.626 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:18:15.626 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:15.626 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:18:15.626 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:18:15.626 05:07:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:18:15.626 05:07:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' only one NIC for nvmf test
00:18:15.626 05:07:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:18:15.626 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:18:15.626 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync
00:18:15.626 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:18:15.626 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e
00:18:15.626 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:18:15.626 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp rmmod nvme_tcp rmmod nvme_fabrics rmmod nvme_keyring
00:18:15.626 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:18:15.626 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e
00:18:15.626 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0
00:18:15.626 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:18:15.626 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:18:15.626 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:18:15.626 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:18:15.626 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:18:15.626 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns
00:18:15.626 05:07:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:15.626 05:07:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:18:15.626 05:07:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:17.535 05:07:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush
cvl_0_1 00:18:17.535 05:07:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:17.535 05:07:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:17.535 05:07:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:17.535 05:07:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:17.535 05:07:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:17.535 05:07:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:17.535 05:07:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:17.535 05:07:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:17.535 05:07:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:17.535 05:07:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:17.535 05:07:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:17.535 05:07:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:17.535 05:07:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:17.535 05:07:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:17.535 05:07:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:17.535 05:07:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:17.535 05:07:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:17.535 05:07:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.535 05:07:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.535 05:07:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.535 05:07:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:17.535 00:18:17.535 real 0m4.288s 00:18:17.535 user 0m0.790s 00:18:17.535 sys 0m1.490s 00:18:17.535 05:07:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:17.535 05:07:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:17.535 ************************************ 00:18:17.535 END TEST nvmf_target_multipath 00:18:17.535 ************************************ 00:18:17.535 05:07:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:17.535 05:07:23 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:17.535 05:07:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:17.535 05:07:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:17.535 05:07:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:17.535 ************************************ 00:18:17.535 START TEST nvmf_zcopy 00:18:17.535 ************************************ 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:17.535 * Looking for test storage... 
00:18:17.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:17.535 05:07:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:19.456 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:19.456 
05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:19.456 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:19.456 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:19.456 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:19.456 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:18:19.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:19.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms
00:18:19.457
00:18:19.457 --- 10.0.0.2 ping statistics ---
00:18:19.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:19.457 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:18:19.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:19.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms
00:18:19.457
00:18:19.457 --- 10.0.0.1 ping statistics ---
00:18:19.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:19.457 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=688672
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 688672
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 688672 ']'
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable
00:18:19.457 05:07:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:19.717 [2024-07-13 05:07:25.974753] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:18:19.717 [2024-07-13 05:07:25.974919] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:19.717 EAL: No free 2048 kB hugepages reported on node 1
00:18:19.717 [2024-07-13 05:07:26.111562] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:19.976 [2024-07-13 05:07:26.367715] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:19.976 [2024-07-13 05:07:26.367784] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:19.976 [2024-07-13 05:07:26.367814] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:19.976 [2024-07-13 05:07:26.367841] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:19.976 [2024-07-13 05:07:26.367863] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:19.976 [2024-07-13 05:07:26.367924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:18:20.543 05:07:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:20.543 05:07:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0
00:18:20.543 05:07:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:18:20.543 05:07:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable
00:18:20.543 05:07:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:20.543 05:07:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:20.543 05:07:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:18:20.543 05:07:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:18:20.543 05:07:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:20.543 05:07:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:20.543 [2024-07-13 05:07:26.953315] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:20.543 05:07:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:20.543 05:07:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:18:20.543 05:07:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:20.543 05:07:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:20.543 05:07:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:20.543 05:07:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:20.543 05:07:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:20.543 05:07:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:20.543 [2024-07-13 05:07:26.969514] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:20.543 05:07:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:20.543 05:07:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:18:20.543 05:07:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:20.543 05:07:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:20.543 05:07:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:20.543 05:07:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:18:20.543 05:07:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:20.543 05:07:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:20.543 malloc0
00:18:20.543 05:07:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:20.543 05:07:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:18:20.543 05:07:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:20.543 05:07:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:20.804 05:07:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:20.804 05:07:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:18:20.804 05:07:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:18:20.804 05:07:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:18:20.804 05:07:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:18:20.804 05:07:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:18:20.804 05:07:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:18:20.804 {
00:18:20.804 "params": {
00:18:20.804 "name": "Nvme$subsystem",
00:18:20.804 "trtype": "$TEST_TRANSPORT",
00:18:20.804 "traddr": "$NVMF_FIRST_TARGET_IP",
00:18:20.804 "adrfam": "ipv4",
00:18:20.804 "trsvcid": "$NVMF_PORT",
00:18:20.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:18:20.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:18:20.804 "hdgst": ${hdgst:-false},
00:18:20.804 "ddgst": ${ddgst:-false}
00:18:20.804 },
00:18:20.804 "method": "bdev_nvme_attach_controller"
00:18:20.804 }
00:18:20.804 EOF
00:18:20.804 )")
00:18:20.804 05:07:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:18:20.804 05:07:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:18:20.804 05:07:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:18:20.804 05:07:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:18:20.804 "params": {
00:18:20.804 "name": "Nvme1",
00:18:20.804 "trtype": "tcp",
00:18:20.804 "traddr": "10.0.0.2",
00:18:20.804 "adrfam": "ipv4",
00:18:20.804 "trsvcid": "4420",
00:18:20.804 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:20.804 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:20.804 "hdgst": false,
00:18:20.804 "ddgst": false
00:18:20.804 },
00:18:20.804 "method": "bdev_nvme_attach_controller"
00:18:20.804 }'
00:18:20.804 [2024-07-13 05:07:27.134298] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... [2024-07-13 05:07:27.134452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid688832 ]
00:18:20.804 EAL: No free 2048 kB hugepages reported on node 1
00:18:20.804 [2024-07-13 05:07:27.281716] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:21.065 [2024-07-13 05:07:27.534563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:18:21.636 Running I/O for 10 seconds...
00:18:31.620 00:18:31.620 Latency(us) 00:18:31.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.621 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:31.621 Verification LBA range: start 0x0 length 0x1000 00:18:31.621 Nvme1n1 : 10.02 4450.35 34.77 0.00 0.00 28684.43 5000.15 38059.43 00:18:31.621 =================================================================================================================== 00:18:31.621 Total : 4450.35 34.77 0.00 0.00 28684.43 5000.15 38059.43 00:18:33.006 05:07:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=690262 00:18:33.006 05:07:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:33.006 05:07:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:33.006 05:07:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:33.006 05:07:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:33.006 05:07:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:33.006 05:07:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:33.006 05:07:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:33.006 05:07:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:33.006 { 00:18:33.006 "params": { 00:18:33.006 "name": "Nvme$subsystem", 00:18:33.006 "trtype": "$TEST_TRANSPORT", 00:18:33.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.006 "adrfam": "ipv4", 00:18:33.006 "trsvcid": "$NVMF_PORT", 00:18:33.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.006 "hdgst": ${hdgst:-false}, 00:18:33.006 "ddgst": ${ddgst:-false} 00:18:33.006 }, 00:18:33.006 "method": "bdev_nvme_attach_controller" 00:18:33.006 } 00:18:33.006 EOF 00:18:33.006 )") 00:18:33.006 [2024-07-13 05:07:39.077486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.006 [2024-07-13 05:07:39.077537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.006 05:07:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:33.006 05:07:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
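For reference, the zero-copy target that this bdevperf run talks to was stood up entirely through the RPCs traced above (target/zcopy.sh@22 through @30). A minimal hand-run sketch of the same sequence — assuming a running nvmf_tgt reachable at the default /var/tmp/spdk.sock, the spdk/scripts/rpc.py and spdk/build/examples/bdevperf paths that appear earlier in this log, and a hypothetical bdevperf_config.json holding the bdev_nvme_attach_controller JSON block that gen_nvmf_target_json prints above:

  # TCP transport with zero-copy enabled; flags verbatim from the trace (-o, -c 0, --zcopy)
  spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  # Subsystem allowing any host (-a), with a serial number and a 10-namespace cap (-m 10)
  spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # 32 MiB RAM-backed bdev with 4096-byte blocks, exported as namespace 1
  spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # Initiator side, same shape as the /dev/fd/63 invocation above (here reading a plain file)
  spdk/build/examples/bdevperf --json bdevperf_config.json -t 5 -q 128 -w randrw -M 50 -o 8192

The interleaved 'Requested NSID 1 already in use' / 'Unable to add namespace' pairs around this point appear to come from the test re-issuing nvmf_subsystem_add_ns for a namespace that is still attached, so the RPC errors are provoked deliberately rather than signalling a crash.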
00:18:33.006 05:07:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:33.006 05:07:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:33.006 "params": { 00:18:33.006 "name": "Nvme1", 00:18:33.006 "trtype": "tcp", 00:18:33.006 "traddr": "10.0.0.2", 00:18:33.006 "adrfam": "ipv4", 00:18:33.006 "trsvcid": "4420", 00:18:33.006 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.006 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:33.006 "hdgst": false, 00:18:33.006 "ddgst": false 00:18:33.006 }, 00:18:33.006 "method": "bdev_nvme_attach_controller" 00:18:33.006 }' 00:18:33.006 [2024-07-13 05:07:39.085420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.006 [2024-07-13 05:07:39.085450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.006 [2024-07-13 05:07:39.093480] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.006 [2024-07-13 05:07:39.093516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.006 [2024-07-13 05:07:39.101492] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.006 [2024-07-13 05:07:39.101526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.006 [2024-07-13 05:07:39.109496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.006 [2024-07-13 05:07:39.109537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.006 [2024-07-13 05:07:39.117542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.006 [2024-07-13 05:07:39.117575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.006 [2024-07-13 05:07:39.125561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.006 [2024-07-13 05:07:39.125594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.006 [2024-07-13 05:07:39.133568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.006 [2024-07-13 05:07:39.133601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.006 [2024-07-13 05:07:39.141616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.006 [2024-07-13 05:07:39.141646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.006 [2024-07-13 05:07:39.149594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.006 [2024-07-13 05:07:39.149622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.006 [2024-07-13 05:07:39.157637] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.006 [2024-07-13 05:07:39.157665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.006 [2024-07-13 05:07:39.163484] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:18:33.006 [2024-07-13 05:07:39.163652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid690262 ]
00:18:33.006 [2024-07-13 05:07:39.165681 .. 05:07:39.245970] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (message pair repeated 11 times)
00:18:33.006 EAL: No free 2048 kB hugepages reported on node 1
00:18:33.006 [2024-07-13 05:07:39.253945 .. 05:07:39.310113] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (message pair repeated 8 times)
00:18:33.006 [2024-07-13 05:07:39.312036] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:33.006 .. 00:18:33.318 [2024-07-13 05:07:39.318106 .. 05:07:39.566917] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (message pair repeated ~32 times)
00:18:33.318 [2024-07-13 05:07:39.573120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:18:33.318 .. 00:18:33.579 [2024-07-13 05:07:39.574888 .. 05:07:40.064472] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (message pair repeated ~62 times)
Running I/O for 5 seconds...
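The error pairs surrounding this point are consistent with the zcopy test re-issuing namespace-add RPCs while bdevperf I/O is in flight: each pair appears to be one nvmf_subsystem_add_ns call that the target rejects because NSID 1 is already attached. A sketch of a single such attempt (the bdev name Malloc0 is an assumption; the NQN matches the config printed above):

  # One add-namespace attempt of the kind being looped here; while NSID 1
  # is attached this fails with "Requested NSID 1 already in use".
  scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0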
00:18:33.838 .. 00:18:36.180 [2024-07-13 05:07:40.082219 .. 05:07:42.575389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (message pair repeated ~160 times during the 5-second I/O run)
[2024-07-13 05:07:42.575429]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.180 [2024-07-13 05:07:42.591120] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.180 [2024-07-13 05:07:42.591182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.180 [2024-07-13 05:07:42.605424] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.180 [2024-07-13 05:07:42.605464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.180 [2024-07-13 05:07:42.622415] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.180 [2024-07-13 05:07:42.622455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.180 [2024-07-13 05:07:42.638776] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.180 [2024-07-13 05:07:42.638817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.180 [2024-07-13 05:07:42.655811] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.180 [2024-07-13 05:07:42.655851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.180 [2024-07-13 05:07:42.671689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.180 [2024-07-13 05:07:42.671729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.441 [2024-07-13 05:07:42.686600] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.441 [2024-07-13 05:07:42.686641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.441 [2024-07-13 05:07:42.703732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.441 [2024-07-13 05:07:42.703772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.441 [2024-07-13 05:07:42.720629] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.441 [2024-07-13 05:07:42.720670] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.441 [2024-07-13 05:07:42.737350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.441 [2024-07-13 05:07:42.737390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.441 [2024-07-13 05:07:42.753758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.441 [2024-07-13 05:07:42.753798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.441 [2024-07-13 05:07:42.770105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.441 [2024-07-13 05:07:42.770156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.441 [2024-07-13 05:07:42.786014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.441 [2024-07-13 05:07:42.786050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.441 [2024-07-13 05:07:42.802219] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.441 [2024-07-13 05:07:42.802259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.441 [2024-07-13 05:07:42.818421] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.441 [2024-07-13 05:07:42.818462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.441 [2024-07-13 05:07:42.832372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.441 [2024-07-13 05:07:42.832412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.441 [2024-07-13 05:07:42.847414] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.441 [2024-07-13 05:07:42.847469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.441 [2024-07-13 05:07:42.864279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.441 [2024-07-13 05:07:42.864334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.441 [2024-07-13 05:07:42.881159] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.441 [2024-07-13 05:07:42.881200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.441 [2024-07-13 05:07:42.897883] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.441 [2024-07-13 05:07:42.897926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.441 [2024-07-13 05:07:42.914429] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.441 [2024-07-13 05:07:42.914469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.441 [2024-07-13 05:07:42.931559] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.441 [2024-07-13 05:07:42.931613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.698 [2024-07-13 05:07:42.947654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.698 [2024-07-13 05:07:42.947696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.698 [2024-07-13 05:07:42.962803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.698 [2024-07-13 05:07:42.962843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.698 [2024-07-13 05:07:42.978713] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.698 [2024-07-13 05:07:42.978752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.698 [2024-07-13 05:07:42.992007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.698 [2024-07-13 05:07:42.992058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.698 [2024-07-13 05:07:43.008199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.698 [2024-07-13 05:07:43.008253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.698 [2024-07-13 05:07:43.022365] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.698 [2024-07-13 05:07:43.022405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.698 [2024-07-13 05:07:43.038346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.698 [2024-07-13 05:07:43.038381] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.698 [2024-07-13 05:07:43.054423] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.698 [2024-07-13 05:07:43.054463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.698 [2024-07-13 05:07:43.070671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.698 [2024-07-13 05:07:43.070710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.698 [2024-07-13 05:07:43.086358] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.698 [2024-07-13 05:07:43.086414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.698 [2024-07-13 05:07:43.102510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.698 [2024-07-13 05:07:43.102544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.698 [2024-07-13 05:07:43.118346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.699 [2024-07-13 05:07:43.118380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.699 [2024-07-13 05:07:43.134823] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.699 [2024-07-13 05:07:43.134876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.699 [2024-07-13 05:07:43.151494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.699 [2024-07-13 05:07:43.151530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.699 [2024-07-13 05:07:43.168008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.699 [2024-07-13 05:07:43.168060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.699 [2024-07-13 05:07:43.183752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.699 [2024-07-13 05:07:43.183792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.699 [2024-07-13 05:07:43.197081] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.699 [2024-07-13 05:07:43.197117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.957 [2024-07-13 05:07:43.213736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.957 [2024-07-13 05:07:43.213776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.957 [2024-07-13 05:07:43.230608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.957 [2024-07-13 05:07:43.230650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.957 [2024-07-13 05:07:43.243650] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.957 [2024-07-13 05:07:43.243691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.957 [2024-07-13 05:07:43.259804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.957 [2024-07-13 05:07:43.259839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.957 [2024-07-13 05:07:43.276140] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.957 [2024-07-13 05:07:43.276180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.957 [2024-07-13 05:07:43.292931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.957 [2024-07-13 05:07:43.292984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.957 [2024-07-13 05:07:43.310168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.957 [2024-07-13 05:07:43.310208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.957 [2024-07-13 05:07:43.326422] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.957 [2024-07-13 05:07:43.326457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.957 [2024-07-13 05:07:43.341968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.957 [2024-07-13 05:07:43.342003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.957 [2024-07-13 05:07:43.358676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.957 [2024-07-13 05:07:43.358715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.957 [2024-07-13 05:07:43.374585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.957 [2024-07-13 05:07:43.374620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.957 [2024-07-13 05:07:43.389998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.957 [2024-07-13 05:07:43.390034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.957 [2024-07-13 05:07:43.405588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.957 [2024-07-13 05:07:43.405623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.957 [2024-07-13 05:07:43.422179] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.957 [2024-07-13 05:07:43.422219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.957 [2024-07-13 05:07:43.438047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.957 [2024-07-13 05:07:43.438084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.957 [2024-07-13 05:07:43.454129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.957 [2024-07-13 05:07:43.454165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.216 [2024-07-13 05:07:43.470363] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.216 [2024-07-13 05:07:43.470404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.216 [2024-07-13 05:07:43.486094] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.216 [2024-07-13 05:07:43.486130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.216 [2024-07-13 05:07:43.501707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.216 [2024-07-13 05:07:43.501747] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.216 [2024-07-13 05:07:43.516158] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.216 [2024-07-13 05:07:43.516198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.216 [2024-07-13 05:07:43.532530] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.216 [2024-07-13 05:07:43.532571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.216 [2024-07-13 05:07:43.548224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.216 [2024-07-13 05:07:43.548266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.216 [2024-07-13 05:07:43.564061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.216 [2024-07-13 05:07:43.564098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.216 [2024-07-13 05:07:43.578154] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.216 [2024-07-13 05:07:43.578191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.216 [2024-07-13 05:07:43.593590] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.216 [2024-07-13 05:07:43.593625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.216 [2024-07-13 05:07:43.609083] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.216 [2024-07-13 05:07:43.609120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.216 [2024-07-13 05:07:43.624505] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.216 [2024-07-13 05:07:43.624542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.216 [2024-07-13 05:07:43.639634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.216 [2024-07-13 05:07:43.639671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.216 [2024-07-13 05:07:43.654897] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.216 [2024-07-13 05:07:43.654934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.216 [2024-07-13 05:07:43.670337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.216 [2024-07-13 05:07:43.670387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.216 [2024-07-13 05:07:43.685281] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.216 [2024-07-13 05:07:43.685316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.216 [2024-07-13 05:07:43.700757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.216 [2024-07-13 05:07:43.700809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.216 [2024-07-13 05:07:43.715831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.216 [2024-07-13 05:07:43.715893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.476 [2024-07-13 05:07:43.731263] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.476 [2024-07-13 05:07:43.731299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.476 [2024-07-13 05:07:43.746459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.476 [2024-07-13 05:07:43.746495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.476 [2024-07-13 05:07:43.761598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.476 [2024-07-13 05:07:43.761635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.476 [2024-07-13 05:07:43.776485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.476 [2024-07-13 05:07:43.776537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.476 [2024-07-13 05:07:43.792795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.476 [2024-07-13 05:07:43.792847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.476 [2024-07-13 05:07:43.808613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.476 [2024-07-13 05:07:43.808649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.476 [2024-07-13 05:07:43.821715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.476 [2024-07-13 05:07:43.821751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.476 [2024-07-13 05:07:43.836860] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.476 [2024-07-13 05:07:43.836919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.476 [2024-07-13 05:07:43.852737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.476 [2024-07-13 05:07:43.852773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.476 [2024-07-13 05:07:43.868735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.476 [2024-07-13 05:07:43.868772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.476 [2024-07-13 05:07:43.885061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.476 [2024-07-13 05:07:43.885098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.476 [2024-07-13 05:07:43.900473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.476 [2024-07-13 05:07:43.900510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.476 [2024-07-13 05:07:43.915860] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.476 [2024-07-13 05:07:43.915905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.476 [2024-07-13 05:07:43.928657] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.476 [2024-07-13 05:07:43.928692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.476 [2024-07-13 05:07:43.943118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.476 [2024-07-13 05:07:43.943155] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.476 [2024-07-13 05:07:43.958471] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.476 [2024-07-13 05:07:43.958522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.476 [2024-07-13 05:07:43.974120] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.476 [2024-07-13 05:07:43.974157] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.735 [2024-07-13 05:07:43.989935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.735 [2024-07-13 05:07:43.989973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.735 [2024-07-13 05:07:44.005767] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.735 [2024-07-13 05:07:44.005819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.735 [2024-07-13 05:07:44.021306] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.735 [2024-07-13 05:07:44.021357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.735 [2024-07-13 05:07:44.036383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.735 [2024-07-13 05:07:44.036434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.735 [2024-07-13 05:07:44.052385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.736 [2024-07-13 05:07:44.052423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.736 [2024-07-13 05:07:44.067602] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.736 [2024-07-13 05:07:44.067645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.736 [2024-07-13 05:07:44.082295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.736 [2024-07-13 05:07:44.082332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.736 [2024-07-13 05:07:44.097238] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.736 [2024-07-13 05:07:44.097275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.736 [2024-07-13 05:07:44.112561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.736 [2024-07-13 05:07:44.112598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.736 [2024-07-13 05:07:44.127940] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.736 [2024-07-13 05:07:44.127977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.736 [2024-07-13 05:07:44.142918] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.736 [2024-07-13 05:07:44.142954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.736 [2024-07-13 05:07:44.158662] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.736 [2024-07-13 05:07:44.158699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.736 [2024-07-13 05:07:44.173895] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.736 [2024-07-13 05:07:44.173932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.736 [2024-07-13 05:07:44.189262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.736 [2024-07-13 05:07:44.189299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.736 [2024-07-13 05:07:44.204832] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.736 [2024-07-13 05:07:44.204890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.736 [2024-07-13 05:07:44.220481] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.736 [2024-07-13 05:07:44.220534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.996 [2024-07-13 05:07:44.236506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.996 [2024-07-13 05:07:44.236559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.996 [2024-07-13 05:07:44.252664] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.996 [2024-07-13 05:07:44.252701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.996 [2024-07-13 05:07:44.266260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.996 [2024-07-13 05:07:44.266297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.996 [2024-07-13 05:07:44.281596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.996 [2024-07-13 05:07:44.281633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.996 [2024-07-13 05:07:44.298689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.996 [2024-07-13 05:07:44.298742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.996 [2024-07-13 05:07:44.315082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.996 [2024-07-13 05:07:44.315119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.996 [2024-07-13 05:07:44.331435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.996 [2024-07-13 05:07:44.331475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.996 [2024-07-13 05:07:44.347996] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.996 [2024-07-13 05:07:44.348035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.996 [2024-07-13 05:07:44.364020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.996 [2024-07-13 05:07:44.364071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.996 [2024-07-13 05:07:44.377689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.996 [2024-07-13 05:07:44.377729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.996 [2024-07-13 05:07:44.393438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.996 [2024-07-13 05:07:44.393488] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.996 [2024-07-13 05:07:44.409617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.996 [2024-07-13 05:07:44.409666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.996 [2024-07-13 05:07:44.425806] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.996 [2024-07-13 05:07:44.425847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.996 [2024-07-13 05:07:44.442325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.996 [2024-07-13 05:07:44.442361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.996 [2024-07-13 05:07:44.458962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.996 [2024-07-13 05:07:44.459013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.996 [2024-07-13 05:07:44.472465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.996 [2024-07-13 05:07:44.472514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.996 [2024-07-13 05:07:44.488837] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.996 [2024-07-13 05:07:44.488880] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.257 [2024-07-13 05:07:44.505019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.257 [2024-07-13 05:07:44.505057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.257 [2024-07-13 05:07:44.517982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.257 [2024-07-13 05:07:44.518034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.257 [2024-07-13 05:07:44.532862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.257 [2024-07-13 05:07:44.532914] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.257 [2024-07-13 05:07:44.548489] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.257 [2024-07-13 05:07:44.548529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.257 [2024-07-13 05:07:44.564142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.257 [2024-07-13 05:07:44.564193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.257 [2024-07-13 05:07:44.579840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.257 [2024-07-13 05:07:44.579893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.257 [2024-07-13 05:07:44.595293] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.257 [2024-07-13 05:07:44.595333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.257 [2024-07-13 05:07:44.611149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.257 [2024-07-13 05:07:44.611204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.257 [2024-07-13 05:07:44.625633] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.257 [2024-07-13 05:07:44.625681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.257 [2024-07-13 05:07:44.641859] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.257 [2024-07-13 05:07:44.641907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.257 [2024-07-13 05:07:44.657445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.257 [2024-07-13 05:07:44.657495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.257 [2024-07-13 05:07:44.673194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.257 [2024-07-13 05:07:44.673234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.257 [2024-07-13 05:07:44.686377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.257 [2024-07-13 05:07:44.686411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.257 [2024-07-13 05:07:44.702114] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.257 [2024-07-13 05:07:44.702167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.257 [2024-07-13 05:07:44.718310] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.257 [2024-07-13 05:07:44.718345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.257 [2024-07-13 05:07:44.734441] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.257 [2024-07-13 05:07:44.734481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.257 [2024-07-13 05:07:44.750607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.257 [2024-07-13 05:07:44.750647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.516 [2024-07-13 05:07:44.767406] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.516 [2024-07-13 05:07:44.767441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.516 [2024-07-13 05:07:44.783010] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.516 [2024-07-13 05:07:44.783061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.516 [2024-07-13 05:07:44.799340] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.516 [2024-07-13 05:07:44.799382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.516 [2024-07-13 05:07:44.815383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.516 [2024-07-13 05:07:44.815423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.516 [2024-07-13 05:07:44.831539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.516 [2024-07-13 05:07:44.831579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.516 [2024-07-13 05:07:44.845267] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.516 [2024-07-13 05:07:44.845307] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.516 [2024-07-13 05:07:44.861155] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.516 [2024-07-13 05:07:44.861190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.516 [2024-07-13 05:07:44.877520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.516 [2024-07-13 05:07:44.877555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.516 [2024-07-13 05:07:44.893605] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.516 [2024-07-13 05:07:44.893646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.516 [2024-07-13 05:07:44.910146] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.516 [2024-07-13 05:07:44.910197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.516 [2024-07-13 05:07:44.926138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.516 [2024-07-13 05:07:44.926189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.516 [2024-07-13 05:07:44.942453] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.516 [2024-07-13 05:07:44.942493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.516 [2024-07-13 05:07:44.958331] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.516 [2024-07-13 05:07:44.958380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.516 [2024-07-13 05:07:44.974410] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.517 [2024-07-13 05:07:44.974450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.517 [2024-07-13 05:07:44.991338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.517 [2024-07-13 05:07:44.991378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.517 [2024-07-13 05:07:45.006534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.517 [2024-07-13 05:07:45.006589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.776 [2024-07-13 05:07:45.022949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.776 [2024-07-13 05:07:45.023012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.776 [2024-07-13 05:07:45.038522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.776 [2024-07-13 05:07:45.038577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.776 [2024-07-13 05:07:45.055357] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.776 [2024-07-13 05:07:45.055398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.776 [2024-07-13 05:07:45.072015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.776 [2024-07-13 05:07:45.072065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.776 [2024-07-13 05:07:45.087009] 
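Context note: the repeated pair above is SPDK's failure path when nvmf_subsystem_add_ns is called with an NSID that is still attached; here the call is evidently retried in a tight loop while namespace 1 remains in place, so every attempt fails as logged. A minimal sketch of reproducing the same error by hand against a running target, assuming the stock scripts/rpc.py helper; the NQN and bdev names are hypothetical, not taken from this run:

  # Reproduction sketch (assumed names; not commands from this run):
  ./scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0
  # Re-adding any bdev with the same explicit NSID fails exactly as logged above:
  ./scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc1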
00:18:38.776 
00:18:38.776 Latency(us)
00:18:38.776 Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:18:38.776 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:18:38.776 	 Nvme1n1           :       5.01    7917.57      61.86      0.00      0.00   16136.68    4951.61   25826.04
00:18:38.776 ===================================================================================================================
00:18:38.776 Total              :               7917.57      61.86      0.00      0.00   16136.68    4951.61   25826.04
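Two quick plausibility checks on the summary row (assumptions: the 8192 B IO size and queue depth 128 stated in the job line, with the latency average read as microseconds): IOPS times IO size should reproduce the MiB/s column, and by Little's law IOPS times mean latency should land near the configured queue depth.

  # Sanity checks, not part of the captured run:
  awk 'BEGIN { printf "%.2f MiB/s\n", 7917.57 * 8192 / (1024 * 1024) }'  # -> 61.86, matches the MiB/s column
  awk 'BEGIN { printf "%.1f in flight\n", 7917.57 * 16136.68 / 1e6 }'    # -> 127.8, close to the depth of 128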
00:18:38.776 [2024-07-13 05:07:45.100688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:38.776 [2024-07-13 05:07:45.100725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair continues at roughly 8-16 ms intervals, differing only in timestamps, through the final pair below (log clock 00:18:38.776 to 00:18:39.297; roughly 85 repetitions elided) ...]
00:18:39.297 [2024-07-13 05:07:45.766527] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:39.297 [2024-07-13 05:07:45.766553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:39.297 [2024-07-13 05:07:45.774564] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.297 [2024-07-13 05:07:45.774590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.297 [2024-07-13 05:07:45.782568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.297 [2024-07-13 05:07:45.782594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.297 [2024-07-13 05:07:45.790606] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.297 [2024-07-13 05:07:45.790633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:45.798644] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:45.798672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:45.806642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:45.806668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:45.814790] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:45.814848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:45.822747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:45.822785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:45.830705] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:45.830732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:45.838759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:45.838785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:45.846748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:45.846774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:45.854781] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:45.854807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:45.862808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:45.862836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:45.870815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:45.870841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:45.878879] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:45.878907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:45.886899] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:45.886927] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:45.894899] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:45.894942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:45.902968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:45.902997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:45.910966] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:45.911007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:45.919102] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:45.919166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:45.927016] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:45.927044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:45.935030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:45.935058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:45.943042] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:45.943070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:45.951086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:45.951114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:45.959149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:45.959191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:45.967230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:45.967285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:45.975136] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:45.975179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:45.983185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:45.983227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:45.991233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:45.991260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:45.999230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:45.999256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:46.007266] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:46.007293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:46.015272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:46.015299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:46.023282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:46.023308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:46.031342] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:46.031369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:46.039328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:46.039354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:46.047367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:46.047394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.558 [2024-07-13 05:07:46.055409] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.558 [2024-07-13 05:07:46.055436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.818 [2024-07-13 05:07:46.063399] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.818 [2024-07-13 05:07:46.063431] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.818 [2024-07-13 05:07:46.071466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.818 [2024-07-13 05:07:46.071499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.818 [2024-07-13 05:07:46.079490] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.818 [2024-07-13 05:07:46.079522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.819 [2024-07-13 05:07:46.087490] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.819 [2024-07-13 05:07:46.087522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.819 [2024-07-13 05:07:46.095528] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.819 [2024-07-13 05:07:46.095561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.819 [2024-07-13 05:07:46.103528] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.819 [2024-07-13 05:07:46.103560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.819 [2024-07-13 05:07:46.111574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.819 [2024-07-13 05:07:46.111606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (690262) - No such process 00:18:39.819 05:07:46 nvmf_tcp.nvmf_zcopy -- 
target/zcopy.sh@49 -- # wait 690262 00:18:39.819 05:07:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:39.819 05:07:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.819 05:07:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:39.819 05:07:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.819 05:07:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:39.819 05:07:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.819 05:07:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:39.819 delay0 00:18:39.819 05:07:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.819 05:07:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:39.819 05:07:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.819 05:07:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:39.819 05:07:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.819 05:07:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:39.819 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.819 [2024-07-13 05:07:46.290591] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:46.398 Initializing NVMe Controllers 00:18:46.398 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:46.398 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:46.398 Initialization complete. Launching workers. 
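A note for readers following the trace: steps @52 through @56 above swap NSID 1 from malloc0 onto an artificially slow delay bdev so that the abort example has long-lived in-flight I/O to cancel; the worker statistics follow below. A condensed sketch of that sequence, assuming SPDK's scripts/rpc.py is on PATH and talking to this same target (all arguments are copied from the trace; this is a reading aid, not the test script itself):

  # replace the malloc-backed namespace with a 1s-latency (1000000 us) delay bdev
  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # drive 64-deep random mixed I/O for 5 seconds and submit aborts against it
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'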
00:18:46.398 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 797 00:18:46.398 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1084, failed to submit 33 00:18:46.398 success 931, unsuccess 153, failed 0 00:18:46.398 05:07:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:46.398 05:07:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:46.398 05:07:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:46.398 05:07:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:46.398 05:07:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:46.398 05:07:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:46.398 05:07:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:46.398 05:07:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:46.398 rmmod nvme_tcp 00:18:46.398 rmmod nvme_fabrics 00:18:46.398 rmmod nvme_keyring 00:18:46.398 05:07:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:46.398 05:07:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:46.398 05:07:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:46.398 05:07:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 688672 ']' 00:18:46.398 05:07:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 688672 00:18:46.398 05:07:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 688672 ']' 00:18:46.398 05:07:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 688672 00:18:46.398 05:07:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:18:46.398 05:07:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:46.398 05:07:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 688672 00:18:46.398 05:07:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:46.398 05:07:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:46.398 05:07:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 688672' 00:18:46.398 killing process with pid 688672 00:18:46.398 05:07:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 688672 00:18:46.398 05:07:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 688672 00:18:47.773 05:07:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:47.773 05:07:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:47.773 05:07:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:47.773 05:07:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:47.773 05:07:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:47.773 05:07:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.773 05:07:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.773 05:07:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.307 05:07:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:50.307 00:18:50.307 real 0m32.429s 00:18:50.307 user 0m48.528s 00:18:50.307 sys 0m8.278s 00:18:50.307 05:07:56 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:18:50.307 05:07:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:50.307 ************************************ 00:18:50.307 END TEST nvmf_zcopy 00:18:50.307 ************************************ 00:18:50.307 05:07:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:50.307 05:07:56 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:50.307 05:07:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:50.307 05:07:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:50.307 05:07:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:50.307 ************************************ 00:18:50.307 START TEST nvmf_nmic 00:18:50.307 ************************************ 00:18:50.307 05:07:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:50.307 * Looking for test storage... 00:18:50.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:50.307 05:07:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:50.307 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:50.307 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:50.307 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:50.307 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:50.307 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:50.307 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:50.307 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:50.307 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same golangci/protoc/go triple repeated several more times; condensed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=[same value with the go toolchain dirs prepended once more; condensed] 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=[same value with the protoc/go dirs prepended once more; condensed] 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo [the exported PATH value; condensed] 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic --
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:50.308 05:07:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:52.238 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:52.238 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:52.239 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:52.239 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:52.239 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:52.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:52.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:18:52.239 00:18:52.239 --- 10.0.0.2 ping statistics --- 00:18:52.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.239 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:52.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:52.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:18:52.239 00:18:52.239 --- 10.0.0.1 ping statistics --- 00:18:52.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.239 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=693911 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 693911 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 693911 ']' 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:52.239 05:07:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:52.239 [2024-07-13 05:07:58.523570] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:52.239 [2024-07-13 05:07:58.523738] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.239 EAL: No free 2048 kB hugepages reported on node 1 00:18:52.239 [2024-07-13 05:07:58.662478] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:52.499 [2024-07-13 05:07:58.927175] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.499 [2024-07-13 05:07:58.927249] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
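The nvmfappstart/waitforlisten pair traced above amounts to launching the target inside the server-side network namespace and then polling its RPC socket until it answers. A minimal sketch, assuming waitforlisten can be approximated by an RPC polling loop (the binary path, flags, and socket path are taken from the trace; the loop itself is an assumption about the helper):

  # start nvmf_tgt in the target namespace, then wait for /var/tmp/spdk.sock
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5   # the app is ready once the RPC UNIX socket responds
  done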
00:18:52.499 [2024-07-13 05:07:58.927277] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.499 [2024-07-13 05:07:58.927298] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.499 [2024-07-13 05:07:58.927320] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:52.499 [2024-07-13 05:07:58.927437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.499 [2024-07-13 05:07:58.927519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:52.499 [2024-07-13 05:07:58.927775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.499 [2024-07-13 05:07:58.927782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:53.065 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:53.065 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:18:53.065 05:07:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:53.065 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:53.065 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:53.065 05:07:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.065 05:07:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:53.065 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.065 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:53.065 [2024-07-13 05:07:59.473477] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.065 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.065 05:07:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:53.065 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.065 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:53.065 Malloc0 00:18:53.065 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.065 05:07:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:53.065 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.065 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:53.065 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.065 05:07:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:53.065 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.065 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:53.324 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.324 05:07:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:53.324 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.324 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:53.324 [2024-07-13 05:07:59.575793] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.324 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.324 05:07:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:53.324 test case1: single bdev can't be used in multiple subsystems 00:18:53.324 05:07:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:53.324 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.324 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:53.324 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.324 05:07:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:53.324 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.324 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:53.324 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.324 05:07:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:53.324 05:07:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:53.324 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.324 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:53.324 [2024-07-13 05:07:59.599646] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:53.324 [2024-07-13 05:07:59.599704] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:53.324 [2024-07-13 05:07:59.599729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.324 request: 00:18:53.324 { 00:18:53.324 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:53.324 "namespace": { 00:18:53.324 "bdev_name": "Malloc0", 00:18:53.324 "no_auto_visible": false 00:18:53.324 }, 00:18:53.324 "method": "nvmf_subsystem_add_ns", 00:18:53.324 "req_id": 1 00:18:53.324 } 00:18:53.324 Got JSON-RPC error response 00:18:53.324 response: 00:18:53.324 { 00:18:53.324 "code": -32602, 00:18:53.324 "message": "Invalid parameters" 00:18:53.324 } 00:18:53.324 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:53.324 05:07:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:53.324 05:07:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:53.324 05:07:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:53.324 Adding namespace failed - expected result. 
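Test case1 above exercises the bdev claim check: the first nvmf_subsystem_add_ns takes an exclusive_write claim on Malloc0 for cnode1 (per the bdev.c message), so the second add against cnode2 must fail with the JSON-RPC error shown, and that failure is the pass condition. A standalone sketch reproducing it with the same names as the trace, again assuming scripts/rpc.py is on PATH:

  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # claims Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # expected to fail: Malloc0 already claimed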
00:18:53.324 05:07:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:53.324 test case2: host connect to nvmf target in multiple paths 00:18:53.325 05:07:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:53.325 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.325 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:53.325 [2024-07-13 05:07:59.607775] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:53.325 05:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.325 05:07:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:53.893 05:08:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:54.461 05:08:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:54.461 05:08:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:18:54.461 05:08:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:54.461 05:08:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:54.461 05:08:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:18:56.992 05:08:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:56.992 05:08:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:56.992 05:08:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:56.992 05:08:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:56.992 05:08:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:56.992 05:08:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:18:56.993 05:08:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:56.993 [global] 00:18:56.993 thread=1 00:18:56.993 invalidate=1 00:18:56.993 rw=write 00:18:56.993 time_based=1 00:18:56.993 runtime=1 00:18:56.993 ioengine=libaio 00:18:56.993 direct=1 00:18:56.993 bs=4096 00:18:56.993 iodepth=1 00:18:56.993 norandommap=0 00:18:56.993 numjobs=1 00:18:56.993 00:18:56.993 verify_dump=1 00:18:56.993 verify_backlog=512 00:18:56.993 verify_state_save=0 00:18:56.993 do_verify=1 00:18:56.993 verify=crc32c-intel 00:18:56.993 [job0] 00:18:56.993 filename=/dev/nvme0n1 00:18:56.993 Could not set queue depth (nvme0n1) 00:18:56.993 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:56.993 fio-3.35 00:18:56.993 Starting 1 thread 00:18:57.928 00:18:57.928 job0: (groupid=0, jobs=1): err= 0: pid=694551: Sat Jul 13 05:08:04 2024 00:18:57.928 read: IOPS=20, BW=83.7KiB/s (85.7kB/s)(84.0KiB/1004msec) 00:18:57.928 slat (nsec): min=7319, max=42938, avg=21921.24, stdev=10326.34 
00:18:57.928 clat (usec): min=40890, max=41028, avg=40968.41, stdev=39.82 00:18:57.928 lat (usec): min=40924, max=41046, avg=40990.33, stdev=34.60 00:18:57.928 clat percentiles (usec): 00:18:57.928 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:57.928 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:57.928 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:57.928 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:57.928 | 99.99th=[41157] 00:18:57.928 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:18:57.928 slat (nsec): min=7218, max=62270, avg=18035.74, stdev=8035.36 00:18:57.928 clat (usec): min=201, max=540, avg=256.39, stdev=28.98 00:18:57.928 lat (usec): min=209, max=561, avg=274.42, stdev=31.50 00:18:57.928 clat percentiles (usec): 00:18:57.928 | 1.00th=[ 208], 5.00th=[ 221], 10.00th=[ 231], 20.00th=[ 237], 00:18:57.928 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 262], 00:18:57.928 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 281], 95.00th=[ 289], 00:18:57.928 | 99.00th=[ 314], 99.50th=[ 424], 99.90th=[ 537], 99.95th=[ 537], 00:18:57.928 | 99.99th=[ 537] 00:18:57.928 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:57.928 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:57.928 lat (usec) : 250=42.21%, 500=53.66%, 750=0.19% 00:18:57.928 lat (msec) : 50=3.94% 00:18:57.928 cpu : usr=0.40%, sys=1.50%, ctx=533, majf=0, minf=2 00:18:57.928 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:57.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.928 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.928 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:57.928 00:18:57.928 Run status group 0 (all jobs): 00:18:57.928 READ: bw=83.7KiB/s (85.7kB/s), 83.7KiB/s-83.7KiB/s (85.7kB/s-85.7kB/s), io=84.0KiB (86.0kB), run=1004-1004msec 00:18:57.928 WRITE: bw=2040KiB/s (2089kB/s), 2040KiB/s-2040KiB/s (2089kB/s-2089kB/s), io=2048KiB (2097kB), run=1004-1004msec 00:18:57.928 00:18:57.928 Disk stats (read/write): 00:18:57.928 nvme0n1: ios=68/512, merge=0/0, ticks=773/126, in_queue=899, util=91.98% 00:18:57.928 05:08:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:58.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:58.186 rmmod nvme_tcp 00:18:58.186 rmmod nvme_fabrics 00:18:58.186 rmmod nvme_keyring 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 693911 ']' 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 693911 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 693911 ']' 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 693911 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 693911 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 693911' 00:18:58.186 killing process with pid 693911 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 693911 00:18:58.186 05:08:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 693911 00:19:00.090 05:08:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:00.090 05:08:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:00.090 05:08:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:00.090 05:08:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:00.090 05:08:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:00.090 05:08:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.090 05:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:00.090 05:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.994 05:08:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:01.994 00:19:01.994 real 0m11.794s 00:19:01.994 user 0m27.959s 00:19:01.994 sys 0m2.499s 00:19:01.994 05:08:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:01.994 05:08:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:01.994 ************************************ 00:19:01.994 END TEST nvmf_nmic 00:19:01.994 ************************************ 00:19:01.994 05:08:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:01.994 05:08:08 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:01.994 05:08:08 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:01.994 05:08:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:01.994 05:08:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:01.994 ************************************ 00:19:01.994 START TEST nvmf_fio_target 00:19:01.994 ************************************ 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:01.994 * Looking for test storage... 00:19:01.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:01.994 05:08:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:03.899 05:08:10 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:03.899 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:03.899 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:03.899 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:03.900 05:08:10 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:03.900 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:03.900 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:03.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:03.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:19:03.900 00:19:03.900 --- 10.0.0.2 ping statistics --- 00:19:03.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.900 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:03.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:03.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:19:03.900 00:19:03.900 --- 10.0.0.1 ping statistics --- 00:19:03.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.900 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=696753 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 696753 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 696753 ']' 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:03.900 05:08:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.158 [2024-07-13 05:08:10.460637] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:04.158 [2024-07-13 05:08:10.460789] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.159 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.159 [2024-07-13 05:08:10.602383] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:04.418 [2024-07-13 05:08:10.871753] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:04.418 [2024-07-13 05:08:10.871837] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:04.418 [2024-07-13 05:08:10.871875] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:04.418 [2024-07-13 05:08:10.871900] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:04.418 [2024-07-13 05:08:10.871923] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:04.418 [2024-07-13 05:08:10.872031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.418 [2024-07-13 05:08:10.872095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:04.418 [2024-07-13 05:08:10.872140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.418 [2024-07-13 05:08:10.872152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:04.986 05:08:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:04.986 05:08:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:19:04.986 05:08:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:04.986 05:08:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:04.986 05:08:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.986 05:08:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.986 05:08:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:05.245 [2024-07-13 05:08:11.606773] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:05.245 05:08:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:05.504 05:08:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:05.504 05:08:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:06.073 05:08:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:06.073 05:08:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:06.331 05:08:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
00:19:06.331 05:08:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:06.589 05:08:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:06.589 05:08:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:06.845 05:08:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:07.102 05:08:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:07.102 05:08:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:07.368 05:08:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:07.368 05:08:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:07.643 05:08:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:07.643 05:08:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:07.900 05:08:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:08.158 05:08:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:08.158 05:08:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:08.416 05:08:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:08.416 05:08:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:08.674 05:08:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:08.931 [2024-07-13 05:08:15.321273] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:08.932 05:08:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:09.189 05:08:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:09.455 05:08:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:10.020 05:08:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:10.021 05:08:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:19:10.021 05:08:16 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:10.021 05:08:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:19:10.021 05:08:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:19:10.021 05:08:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:19:12.553 05:08:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:12.554 05:08:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:12.554 05:08:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:12.554 05:08:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:19:12.554 05:08:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:12.554 05:08:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:19:12.554 05:08:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:12.554 [global] 00:19:12.554 thread=1 00:19:12.554 invalidate=1 00:19:12.554 rw=write 00:19:12.554 time_based=1 00:19:12.554 runtime=1 00:19:12.554 ioengine=libaio 00:19:12.554 direct=1 00:19:12.554 bs=4096 00:19:12.554 iodepth=1 00:19:12.554 norandommap=0 00:19:12.554 numjobs=1 00:19:12.554 00:19:12.554 verify_dump=1 00:19:12.554 verify_backlog=512 00:19:12.554 verify_state_save=0 00:19:12.554 do_verify=1 00:19:12.554 verify=crc32c-intel 00:19:12.554 [job0] 00:19:12.554 filename=/dev/nvme0n1 00:19:12.554 [job1] 00:19:12.554 filename=/dev/nvme0n2 00:19:12.554 [job2] 00:19:12.554 filename=/dev/nvme0n3 00:19:12.554 [job3] 00:19:12.554 filename=/dev/nvme0n4 00:19:12.554 Could not set queue depth (nvme0n1) 00:19:12.554 Could not set queue depth (nvme0n2) 00:19:12.554 Could not set queue depth (nvme0n3) 00:19:12.554 Could not set queue depth (nvme0n4) 00:19:12.554 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:12.554 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:12.554 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:12.554 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:12.554 fio-3.35 00:19:12.554 Starting 4 threads 00:19:13.487 00:19:13.487 job0: (groupid=0, jobs=1): err= 0: pid=697958: Sat Jul 13 05:08:19 2024 00:19:13.487 read: IOPS=19, BW=79.5KiB/s (81.4kB/s)(80.0KiB/1006msec) 00:19:13.487 slat (nsec): min=13491, max=33766, avg=18719.50, stdev=7583.19 00:19:13.487 clat (usec): min=40673, max=41979, avg=41257.34, stdev=473.47 00:19:13.487 lat (usec): min=40691, max=42013, avg=41276.06, stdev=477.04 00:19:13.487 clat percentiles (usec): 00:19:13.487 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:13.487 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:13.487 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:19:13.487 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:13.487 | 99.99th=[42206] 00:19:13.487 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:19:13.487 slat (nsec): min=7317, max=64518, avg=26355.17, stdev=12283.01 
00:19:13.487 clat (usec): min=223, max=627, avg=318.64, stdev=56.01 00:19:13.487 lat (usec): min=232, max=639, avg=344.99, stdev=59.02 00:19:13.487 clat percentiles (usec): 00:19:13.487 | 1.00th=[ 235], 5.00th=[ 245], 10.00th=[ 255], 20.00th=[ 269], 00:19:13.487 | 30.00th=[ 281], 40.00th=[ 293], 50.00th=[ 306], 60.00th=[ 326], 00:19:13.487 | 70.00th=[ 351], 80.00th=[ 367], 90.00th=[ 392], 95.00th=[ 416], 00:19:13.488 | 99.00th=[ 457], 99.50th=[ 537], 99.90th=[ 627], 99.95th=[ 627], 00:19:13.488 | 99.99th=[ 627] 00:19:13.488 bw ( KiB/s): min= 4096, max= 4096, per=33.53%, avg=4096.00, stdev= 0.00, samples=1 00:19:13.488 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:13.488 lat (usec) : 250=6.20%, 500=89.47%, 750=0.56% 00:19:13.488 lat (msec) : 50=3.76% 00:19:13.488 cpu : usr=0.40%, sys=1.49%, ctx=533, majf=0, minf=2 00:19:13.488 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:13.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.488 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.488 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:13.488 job1: (groupid=0, jobs=1): err= 0: pid=697959: Sat Jul 13 05:08:19 2024 00:19:13.488 read: IOPS=1027, BW=4112KiB/s (4211kB/s)(4116KiB/1001msec) 00:19:13.488 slat (nsec): min=6793, max=50768, avg=12890.63, stdev=5792.68 00:19:13.488 clat (usec): min=282, max=40993, avg=550.33, stdev=2822.72 00:19:13.488 lat (usec): min=289, max=41010, avg=563.22, stdev=2822.85 00:19:13.488 clat percentiles (usec): 00:19:13.488 | 1.00th=[ 289], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 318], 00:19:13.488 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 347], 60.00th=[ 367], 00:19:13.488 | 70.00th=[ 375], 80.00th=[ 388], 90.00th=[ 396], 95.00th=[ 412], 00:19:13.488 | 99.00th=[ 519], 99.50th=[ 1037], 99.90th=[41157], 99.95th=[41157], 00:19:13.488 | 99.99th=[41157] 00:19:13.488 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:13.488 slat (nsec): min=5986, max=53010, avg=13607.06, stdev=7615.25 00:19:13.488 clat (usec): min=191, max=1181, avg=253.64, stdev=65.55 00:19:13.488 lat (usec): min=199, max=1195, avg=267.25, stdev=68.91 00:19:13.488 clat percentiles (usec): 00:19:13.488 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 212], 00:19:13.488 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 229], 60.00th=[ 247], 00:19:13.488 | 70.00th=[ 260], 80.00th=[ 297], 90.00th=[ 338], 95.00th=[ 363], 00:19:13.488 | 99.00th=[ 416], 99.50th=[ 453], 99.90th=[ 1156], 99.95th=[ 1188], 00:19:13.488 | 99.99th=[ 1188] 00:19:13.488 bw ( KiB/s): min= 8192, max= 8192, per=67.07%, avg=8192.00, stdev= 0.00, samples=1 00:19:13.488 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:13.488 lat (usec) : 250=37.89%, 500=61.40%, 750=0.35%, 1000=0.04% 00:19:13.488 lat (msec) : 2=0.12%, 50=0.19% 00:19:13.488 cpu : usr=3.50%, sys=3.50%, ctx=2567, majf=0, minf=1 00:19:13.488 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:13.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.488 issued rwts: total=1029,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.488 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:13.488 job2: (groupid=0, jobs=1): err= 0: pid=697960: Sat Jul 13 05:08:19 2024 
00:19:13.488 read: IOPS=19, BW=79.5KiB/s (81.4kB/s)(80.0KiB/1006msec) 00:19:13.488 slat (nsec): min=13187, max=38227, avg=21731.45, stdev=8948.82 00:19:13.488 clat (usec): min=425, max=41968, avg=39132.71, stdev=9117.76 00:19:13.488 lat (usec): min=443, max=41998, avg=39154.44, stdev=9118.92 00:19:13.488 clat percentiles (usec): 00:19:13.488 | 1.00th=[ 424], 5.00th=[ 424], 10.00th=[41157], 20.00th=[41157], 00:19:13.488 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:13.488 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:19:13.488 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:13.488 | 99.99th=[42206] 00:19:13.488 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:19:13.488 slat (nsec): min=10357, max=73981, avg=32126.11, stdev=12604.94 00:19:13.488 clat (usec): min=305, max=614, avg=394.22, stdev=39.33 00:19:13.488 lat (usec): min=328, max=631, avg=426.35, stdev=43.70 00:19:13.488 clat percentiles (usec): 00:19:13.488 | 1.00th=[ 318], 5.00th=[ 334], 10.00th=[ 343], 20.00th=[ 359], 00:19:13.488 | 30.00th=[ 371], 40.00th=[ 383], 50.00th=[ 392], 60.00th=[ 404], 00:19:13.488 | 70.00th=[ 416], 80.00th=[ 429], 90.00th=[ 445], 95.00th=[ 457], 00:19:13.488 | 99.00th=[ 478], 99.50th=[ 490], 99.90th=[ 619], 99.95th=[ 619], 00:19:13.488 | 99.99th=[ 619] 00:19:13.488 bw ( KiB/s): min= 4096, max= 4096, per=33.53%, avg=4096.00, stdev= 0.00, samples=1 00:19:13.488 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:13.488 lat (usec) : 500=96.05%, 750=0.38% 00:19:13.488 lat (msec) : 50=3.57% 00:19:13.488 cpu : usr=1.09%, sys=1.99%, ctx=533, majf=0, minf=1 00:19:13.488 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:13.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.488 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.488 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:13.488 job3: (groupid=0, jobs=1): err= 0: pid=697961: Sat Jul 13 05:08:19 2024 00:19:13.488 read: IOPS=18, BW=75.8KiB/s (77.6kB/s)(76.0KiB/1003msec) 00:19:13.488 slat (nsec): min=13077, max=34411, avg=17730.00, stdev=6276.32 00:19:13.488 clat (usec): min=40760, max=41050, avg=40963.52, stdev=63.56 00:19:13.488 lat (usec): min=40784, max=41066, avg=40981.25, stdev=59.89 00:19:13.488 clat percentiles (usec): 00:19:13.488 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:13.488 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:13.488 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:13.488 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:13.488 | 99.99th=[41157] 00:19:13.488 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:19:13.488 slat (nsec): min=8361, max=66296, avg=30333.15, stdev=12907.51 00:19:13.488 clat (usec): min=236, max=2360, avg=399.41, stdev=158.50 00:19:13.488 lat (usec): min=245, max=2402, avg=429.74, stdev=161.29 00:19:13.488 clat percentiles (usec): 00:19:13.488 | 1.00th=[ 258], 5.00th=[ 281], 10.00th=[ 293], 20.00th=[ 318], 00:19:13.488 | 30.00th=[ 343], 40.00th=[ 355], 50.00th=[ 371], 60.00th=[ 388], 00:19:13.488 | 70.00th=[ 412], 80.00th=[ 449], 90.00th=[ 502], 95.00th=[ 570], 00:19:13.488 | 99.00th=[ 750], 99.50th=[ 1713], 99.90th=[ 2376], 99.95th=[ 2376], 00:19:13.488 | 99.99th=[ 
2376] 00:19:13.488 bw ( KiB/s): min= 4096, max= 4096, per=33.53%, avg=4096.00, stdev= 0.00, samples=1 00:19:13.488 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:13.488 lat (usec) : 250=0.19%, 500=86.25%, 750=9.23% 00:19:13.488 lat (msec) : 2=0.56%, 4=0.19%, 50=3.58% 00:19:13.488 cpu : usr=0.70%, sys=1.50%, ctx=532, majf=0, minf=1 00:19:13.488 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:13.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.488 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.488 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:13.488 00:19:13.488 Run status group 0 (all jobs): 00:19:13.488 READ: bw=4326KiB/s (4430kB/s), 75.8KiB/s-4112KiB/s (77.6kB/s-4211kB/s), io=4352KiB (4456kB), run=1001-1006msec 00:19:13.488 WRITE: bw=11.9MiB/s (12.5MB/s), 2036KiB/s-6138KiB/s (2085kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1006msec 00:19:13.488 00:19:13.488 Disk stats (read/write): 00:19:13.488 nvme0n1: ios=65/512, merge=0/0, ticks=1467/155, in_queue=1622, util=85.67% 00:19:13.488 nvme0n2: ios=1075/1536, merge=0/0, ticks=452/379, in_queue=831, util=91.45% 00:19:13.488 nvme0n3: ios=73/512, merge=0/0, ticks=842/164, in_queue=1006, util=93.53% 00:19:13.488 nvme0n4: ios=79/512, merge=0/0, ticks=718/197, in_queue=915, util=96.11% 00:19:13.488 05:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:13.488 [global] 00:19:13.488 thread=1 00:19:13.488 invalidate=1 00:19:13.488 rw=randwrite 00:19:13.488 time_based=1 00:19:13.488 runtime=1 00:19:13.488 ioengine=libaio 00:19:13.488 direct=1 00:19:13.488 bs=4096 00:19:13.488 iodepth=1 00:19:13.488 norandommap=0 00:19:13.488 numjobs=1 00:19:13.488 00:19:13.488 verify_dump=1 00:19:13.488 verify_backlog=512 00:19:13.488 verify_state_save=0 00:19:13.488 do_verify=1 00:19:13.488 verify=crc32c-intel 00:19:13.488 [job0] 00:19:13.488 filename=/dev/nvme0n1 00:19:13.488 [job1] 00:19:13.488 filename=/dev/nvme0n2 00:19:13.488 [job2] 00:19:13.488 filename=/dev/nvme0n3 00:19:13.488 [job3] 00:19:13.488 filename=/dev/nvme0n4 00:19:13.488 Could not set queue depth (nvme0n1) 00:19:13.488 Could not set queue depth (nvme0n2) 00:19:13.488 Could not set queue depth (nvme0n3) 00:19:13.488 Could not set queue depth (nvme0n4) 00:19:13.746 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.746 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.746 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.746 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.746 fio-3.35 00:19:13.746 Starting 4 threads 00:19:15.120 00:19:15.120 job0: (groupid=0, jobs=1): err= 0: pid=698187: Sat Jul 13 05:08:21 2024 00:19:15.120 read: IOPS=18, BW=75.1KiB/s (76.9kB/s)(76.0KiB/1012msec) 00:19:15.120 slat (nsec): min=15195, max=33689, avg=24769.37, stdev=6211.95 00:19:15.120 clat (usec): min=40878, max=42332, avg=41351.96, stdev=535.34 00:19:15.120 lat (usec): min=40911, max=42364, avg=41376.73, stdev=534.51 00:19:15.120 clat percentiles (usec): 00:19:15.120 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 
20.00th=[41157], 00:19:15.120 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:15.120 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:15.120 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:15.120 | 99.99th=[42206] 00:19:15.120 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:19:15.120 slat (nsec): min=8971, max=76976, avg=30457.22, stdev=13561.53 00:19:15.120 clat (usec): min=283, max=518, avg=400.49, stdev=47.20 00:19:15.120 lat (usec): min=295, max=559, avg=430.95, stdev=51.38 00:19:15.120 clat percentiles (usec): 00:19:15.120 | 1.00th=[ 302], 5.00th=[ 318], 10.00th=[ 338], 20.00th=[ 359], 00:19:15.120 | 30.00th=[ 371], 40.00th=[ 388], 50.00th=[ 404], 60.00th=[ 416], 00:19:15.120 | 70.00th=[ 429], 80.00th=[ 445], 90.00th=[ 461], 95.00th=[ 474], 00:19:15.120 | 99.00th=[ 506], 99.50th=[ 510], 99.90th=[ 519], 99.95th=[ 519], 00:19:15.120 | 99.99th=[ 519] 00:19:15.120 bw ( KiB/s): min= 4096, max= 4096, per=50.90%, avg=4096.00, stdev= 0.00, samples=1 00:19:15.120 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:15.120 lat (usec) : 500=95.10%, 750=1.32% 00:19:15.120 lat (msec) : 50=3.58% 00:19:15.120 cpu : usr=1.38%, sys=1.68%, ctx=531, majf=0, minf=2 00:19:15.120 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.120 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.120 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:15.120 job1: (groupid=0, jobs=1): err= 0: pid=698188: Sat Jul 13 05:08:21 2024 00:19:15.120 read: IOPS=19, BW=78.6KiB/s (80.5kB/s)(80.0KiB/1018msec) 00:19:15.120 slat (nsec): min=14617, max=32700, avg=20434.00, stdev=7588.40 00:19:15.120 clat (usec): min=40877, max=42025, avg=41101.86, stdev=344.43 00:19:15.120 lat (usec): min=40904, max=42040, avg=41122.30, stdev=344.14 00:19:15.120 clat percentiles (usec): 00:19:15.120 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:15.120 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:15.120 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:19:15.120 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:15.120 | 99.99th=[42206] 00:19:15.120 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:19:15.120 slat (nsec): min=6531, max=72953, avg=23897.74, stdev=12043.59 00:19:15.120 clat (usec): min=246, max=550, avg=350.34, stdev=53.32 00:19:15.120 lat (usec): min=259, max=587, avg=374.24, stdev=57.01 00:19:15.120 clat percentiles (usec): 00:19:15.120 | 1.00th=[ 260], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 297], 00:19:15.120 | 30.00th=[ 314], 40.00th=[ 330], 50.00th=[ 347], 60.00th=[ 363], 00:19:15.120 | 70.00th=[ 379], 80.00th=[ 400], 90.00th=[ 424], 95.00th=[ 445], 00:19:15.120 | 99.00th=[ 482], 99.50th=[ 486], 99.90th=[ 553], 99.95th=[ 553], 00:19:15.120 | 99.99th=[ 553] 00:19:15.120 bw ( KiB/s): min= 4096, max= 4096, per=50.90%, avg=4096.00, stdev= 0.00, samples=1 00:19:15.120 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:15.120 lat (usec) : 250=0.19%, 500=95.86%, 750=0.19% 00:19:15.120 lat (msec) : 50=3.76% 00:19:15.120 cpu : usr=0.49%, sys=1.38%, ctx=532, majf=0, minf=1 00:19:15.120 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.120 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.120 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:15.120 job2: (groupid=0, jobs=1): err= 0: pid=698189: Sat Jul 13 05:08:21 2024 00:19:15.120 read: IOPS=33, BW=135KiB/s (138kB/s)(136KiB/1006msec) 00:19:15.120 slat (nsec): min=8166, max=34397, avg=15982.35, stdev=7680.74 00:19:15.120 clat (usec): min=340, max=41060, avg=22911.87, stdev=20255.75 00:19:15.120 lat (usec): min=358, max=41077, avg=22927.85, stdev=20260.43 00:19:15.120 clat percentiles (usec): 00:19:15.120 | 1.00th=[ 343], 5.00th=[ 474], 10.00th=[ 478], 20.00th=[ 490], 00:19:15.120 | 30.00th=[ 506], 40.00th=[ 529], 50.00th=[40633], 60.00th=[41157], 00:19:15.120 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:15.120 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:15.120 | 99.99th=[41157] 00:19:15.120 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:19:15.120 slat (nsec): min=8214, max=72432, avg=28562.26, stdev=12962.05 00:19:15.120 clat (usec): min=224, max=2536, avg=404.89, stdev=150.42 00:19:15.120 lat (usec): min=242, max=2556, avg=433.45, stdev=151.28 00:19:15.120 clat percentiles (usec): 00:19:15.120 | 1.00th=[ 233], 5.00th=[ 258], 10.00th=[ 281], 20.00th=[ 326], 00:19:15.120 | 30.00th=[ 351], 40.00th=[ 375], 50.00th=[ 396], 60.00th=[ 416], 00:19:15.120 | 70.00th=[ 441], 80.00th=[ 461], 90.00th=[ 498], 95.00th=[ 537], 00:19:15.120 | 99.00th=[ 725], 99.50th=[ 1565], 99.90th=[ 2540], 99.95th=[ 2540], 00:19:15.120 | 99.99th=[ 2540] 00:19:15.120 bw ( KiB/s): min= 4096, max= 4096, per=50.90%, avg=4096.00, stdev= 0.00, samples=1 00:19:15.120 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:15.120 lat (usec) : 250=3.30%, 500=83.52%, 750=8.97% 00:19:15.120 lat (msec) : 2=0.55%, 4=0.18%, 50=3.48% 00:19:15.120 cpu : usr=0.70%, sys=1.59%, ctx=548, majf=0, minf=1 00:19:15.120 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.120 issued rwts: total=34,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.120 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:15.120 job3: (groupid=0, jobs=1): err= 0: pid=698190: Sat Jul 13 05:08:21 2024 00:19:15.120 read: IOPS=375, BW=1502KiB/s (1539kB/s)(1516KiB/1009msec) 00:19:15.120 slat (nsec): min=6152, max=34030, avg=13761.85, stdev=7372.82 00:19:15.120 clat (usec): min=285, max=42065, avg=2176.49, stdev=8439.93 00:19:15.120 lat (usec): min=292, max=42082, avg=2190.25, stdev=8441.15 00:19:15.120 clat percentiles (usec): 00:19:15.120 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 326], 00:19:15.120 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 343], 60.00th=[ 351], 00:19:15.120 | 70.00th=[ 363], 80.00th=[ 375], 90.00th=[ 412], 95.00th=[ 553], 00:19:15.120 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:15.120 | 99.99th=[42206] 00:19:15.120 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:19:15.120 slat (nsec): min=6659, max=74321, avg=24204.33, stdev=12138.75 00:19:15.120 clat (usec): min=235, max=477, avg=314.86, stdev=50.64 
00:19:15.120 lat (usec): min=244, max=488, avg=339.07, stdev=53.02 00:19:15.120 clat percentiles (usec): 00:19:15.120 | 1.00th=[ 241], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 265], 00:19:15.120 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 306], 60.00th=[ 318], 00:19:15.120 | 70.00th=[ 347], 80.00th=[ 367], 90.00th=[ 392], 95.00th=[ 404], 00:19:15.120 | 99.00th=[ 429], 99.50th=[ 433], 99.90th=[ 478], 99.95th=[ 478], 00:19:15.120 | 99.99th=[ 478] 00:19:15.120 bw ( KiB/s): min= 4096, max= 4096, per=50.90%, avg=4096.00, stdev= 0.00, samples=1 00:19:15.120 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:15.120 lat (usec) : 250=2.81%, 500=94.73%, 750=0.56% 00:19:15.120 lat (msec) : 50=1.91% 00:19:15.120 cpu : usr=0.89%, sys=1.69%, ctx=892, majf=0, minf=1 00:19:15.120 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.120 issued rwts: total=379,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.120 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:15.120 00:19:15.120 Run status group 0 (all jobs): 00:19:15.120 READ: bw=1776KiB/s (1819kB/s), 75.1KiB/s-1502KiB/s (76.9kB/s-1539kB/s), io=1808KiB (1851kB), run=1006-1018msec 00:19:15.120 WRITE: bw=8047KiB/s (8240kB/s), 2012KiB/s-2036KiB/s (2060kB/s-2085kB/s), io=8192KiB (8389kB), run=1006-1018msec 00:19:15.120 00:19:15.120 Disk stats (read/write): 00:19:15.120 nvme0n1: ios=65/512, merge=0/0, ticks=656/150, in_queue=806, util=87.17% 00:19:15.120 nvme0n2: ios=41/512, merge=0/0, ticks=645/157, in_queue=802, util=87.30% 00:19:15.120 nvme0n3: ios=84/512, merge=0/0, ticks=960/177, in_queue=1137, util=98.12% 00:19:15.120 nvme0n4: ios=398/512, merge=0/0, ticks=1606/150, in_queue=1756, util=98.00% 00:19:15.120 05:08:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:15.120 [global] 00:19:15.120 thread=1 00:19:15.120 invalidate=1 00:19:15.120 rw=write 00:19:15.120 time_based=1 00:19:15.120 runtime=1 00:19:15.120 ioengine=libaio 00:19:15.120 direct=1 00:19:15.120 bs=4096 00:19:15.120 iodepth=128 00:19:15.120 norandommap=0 00:19:15.120 numjobs=1 00:19:15.120 00:19:15.121 verify_dump=1 00:19:15.121 verify_backlog=512 00:19:15.121 verify_state_save=0 00:19:15.121 do_verify=1 00:19:15.121 verify=crc32c-intel 00:19:15.121 [job0] 00:19:15.121 filename=/dev/nvme0n1 00:19:15.121 [job1] 00:19:15.121 filename=/dev/nvme0n2 00:19:15.121 [job2] 00:19:15.121 filename=/dev/nvme0n3 00:19:15.121 [job3] 00:19:15.121 filename=/dev/nvme0n4 00:19:15.121 Could not set queue depth (nvme0n1) 00:19:15.121 Could not set queue depth (nvme0n2) 00:19:15.121 Could not set queue depth (nvme0n3) 00:19:15.121 Could not set queue depth (nvme0n4) 00:19:15.121 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:15.121 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:15.121 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:15.121 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:15.121 fio-3.35 00:19:15.121 Starting 4 threads 00:19:16.494 00:19:16.494 job0: (groupid=0, jobs=1): err= 0: pid=698413: Sat Jul 13 
05:08:22 2024 00:19:16.494 read: IOPS=3934, BW=15.4MiB/s (16.1MB/s)(15.4MiB/1005msec) 00:19:16.494 slat (usec): min=3, max=6592, avg=119.18, stdev=608.44 00:19:16.494 clat (usec): min=2664, max=25549, avg=15432.80, stdev=1990.61 00:19:16.494 lat (usec): min=7108, max=25554, avg=15551.98, stdev=2043.92 00:19:16.494 clat percentiles (usec): 00:19:16.494 | 1.00th=[ 7504], 5.00th=[12911], 10.00th=[13829], 20.00th=[14615], 00:19:16.494 | 30.00th=[14877], 40.00th=[15008], 50.00th=[15139], 60.00th=[15270], 00:19:16.494 | 70.00th=[15533], 80.00th=[16909], 90.00th=[18220], 95.00th=[19006], 00:19:16.494 | 99.00th=[20841], 99.50th=[21365], 99.90th=[25560], 99.95th=[25560], 00:19:16.494 | 99.99th=[25560] 00:19:16.494 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:19:16.494 slat (usec): min=5, max=7688, avg=116.78, stdev=628.47 00:19:16.494 clat (usec): min=8015, max=24836, avg=16129.64, stdev=1557.12 00:19:16.494 lat (usec): min=8037, max=24888, avg=16246.42, stdev=1638.28 00:19:16.494 clat percentiles (usec): 00:19:16.494 | 1.00th=[10945], 5.00th=[13698], 10.00th=[14877], 20.00th=[15401], 00:19:16.494 | 30.00th=[15664], 40.00th=[15926], 50.00th=[16057], 60.00th=[16188], 00:19:16.494 | 70.00th=[16319], 80.00th=[16909], 90.00th=[17433], 95.00th=[18744], 00:19:16.494 | 99.00th=[21365], 99.50th=[22152], 99.90th=[23987], 99.95th=[24249], 00:19:16.494 | 99.99th=[24773] 00:19:16.494 bw ( KiB/s): min=16384, max=16384, per=25.65%, avg=16384.00, stdev= 0.00, samples=2 00:19:16.494 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:19:16.494 lat (msec) : 4=0.01%, 10=0.71%, 20=97.32%, 50=1.96% 00:19:16.494 cpu : usr=6.67%, sys=11.06%, ctx=425, majf=0, minf=11 00:19:16.494 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:16.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:16.494 issued rwts: total=3954,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.494 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:16.494 job1: (groupid=0, jobs=1): err= 0: pid=698414: Sat Jul 13 05:08:22 2024 00:19:16.494 read: IOPS=3238, BW=12.6MiB/s (13.3MB/s)(12.8MiB/1008msec) 00:19:16.494 slat (usec): min=3, max=16668, avg=156.63, stdev=1064.82 00:19:16.494 clat (usec): min=6347, max=36696, avg=19373.52, stdev=5570.38 00:19:16.494 lat (usec): min=6362, max=36716, avg=19530.14, stdev=5622.75 00:19:16.494 clat percentiles (usec): 00:19:16.494 | 1.00th=[ 7570], 5.00th=[11863], 10.00th=[13566], 20.00th=[15270], 00:19:16.494 | 30.00th=[16581], 40.00th=[17171], 50.00th=[18220], 60.00th=[19530], 00:19:16.494 | 70.00th=[21103], 80.00th=[23200], 90.00th=[27657], 95.00th=[30278], 00:19:16.494 | 99.00th=[34866], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:19:16.494 | 99.99th=[36439] 00:19:16.494 write: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec); 0 zone resets 00:19:16.494 slat (usec): min=4, max=16650, avg=124.10, stdev=581.53 00:19:16.494 clat (usec): min=1915, max=36703, avg=17847.51, stdev=4669.05 00:19:16.494 lat (usec): min=1928, max=36730, avg=17971.62, stdev=4704.23 00:19:16.494 clat percentiles (usec): 00:19:16.494 | 1.00th=[ 6194], 5.00th=[ 9241], 10.00th=[10945], 20.00th=[15008], 00:19:16.494 | 30.00th=[16909], 40.00th=[17695], 50.00th=[17957], 60.00th=[18482], 00:19:16.494 | 70.00th=[19006], 80.00th=[20317], 90.00th=[23987], 95.00th=[26346], 00:19:16.494 | 99.00th=[28705], 99.50th=[28967], 99.90th=[36439], 
99.95th=[36439], 00:19:16.494 | 99.99th=[36963] 00:19:16.494 bw ( KiB/s): min=13520, max=15152, per=22.45%, avg=14336.00, stdev=1154.00, samples=2 00:19:16.494 iops : min= 3380, max= 3788, avg=3584.00, stdev=288.50, samples=2 00:19:16.494 lat (msec) : 2=0.04%, 10=5.56%, 20=65.51%, 50=28.88% 00:19:16.494 cpu : usr=5.56%, sys=8.64%, ctx=460, majf=0, minf=9 00:19:16.494 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:16.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:16.495 issued rwts: total=3264,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.495 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:16.495 job2: (groupid=0, jobs=1): err= 0: pid=698415: Sat Jul 13 05:08:22 2024 00:19:16.495 read: IOPS=3987, BW=15.6MiB/s (16.3MB/s)(15.7MiB/1006msec) 00:19:16.495 slat (usec): min=3, max=7095, avg=119.93, stdev=665.26 00:19:16.495 clat (usec): min=2004, max=22472, avg=14951.25, stdev=2194.29 00:19:16.495 lat (usec): min=6264, max=22729, avg=15071.18, stdev=2258.44 00:19:16.495 clat percentiles (usec): 00:19:16.495 | 1.00th=[ 6718], 5.00th=[11076], 10.00th=[12518], 20.00th=[14222], 00:19:16.495 | 30.00th=[14484], 40.00th=[14746], 50.00th=[14877], 60.00th=[15008], 00:19:16.495 | 70.00th=[15139], 80.00th=[15533], 90.00th=[17695], 95.00th=[19006], 00:19:16.495 | 99.00th=[21103], 99.50th=[21365], 99.90th=[22152], 99.95th=[22152], 00:19:16.495 | 99.99th=[22414] 00:19:16.495 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:19:16.495 slat (usec): min=5, max=14462, avg=114.09, stdev=595.40 00:19:16.495 clat (usec): min=8270, max=49612, avg=15410.49, stdev=2849.73 00:19:16.495 lat (usec): min=8373, max=49637, avg=15524.59, stdev=2898.06 00:19:16.495 clat percentiles (usec): 00:19:16.495 | 1.00th=[ 9241], 5.00th=[11600], 10.00th=[13435], 20.00th=[14484], 00:19:16.495 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15270], 60.00th=[15533], 00:19:16.495 | 70.00th=[15664], 80.00th=[15795], 90.00th=[17433], 95.00th=[19268], 00:19:16.495 | 99.00th=[23987], 99.50th=[23987], 99.90th=[49546], 99.95th=[49546], 00:19:16.495 | 99.99th=[49546] 00:19:16.495 bw ( KiB/s): min=16384, max=16384, per=25.65%, avg=16384.00, stdev= 0.00, samples=2 00:19:16.495 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:19:16.495 lat (msec) : 4=0.01%, 10=2.54%, 20=93.77%, 50=3.68% 00:19:16.495 cpu : usr=7.26%, sys=11.54%, ctx=471, majf=0, minf=19 00:19:16.495 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:16.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:16.495 issued rwts: total=4011,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.495 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:16.495 job3: (groupid=0, jobs=1): err= 0: pid=698416: Sat Jul 13 05:08:22 2024 00:19:16.495 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:19:16.495 slat (usec): min=3, max=7107, avg=117.83, stdev=641.18 00:19:16.495 clat (usec): min=8485, max=22654, avg=15195.61, stdev=1920.71 00:19:16.495 lat (usec): min=8979, max=23216, avg=15313.44, stdev=1969.67 00:19:16.495 clat percentiles (usec): 00:19:16.495 | 1.00th=[10421], 5.00th=[11469], 10.00th=[12911], 20.00th=[14222], 00:19:16.495 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15008], 60.00th=[15401], 00:19:16.495 | 
70.00th=[15795], 80.00th=[16450], 90.00th=[17433], 95.00th=[18744], 00:19:16.495 | 99.00th=[20841], 99.50th=[21103], 99.90th=[21890], 99.95th=[22414], 00:19:16.495 | 99.99th=[22676] 00:19:16.495 write: IOPS=4283, BW=16.7MiB/s (17.5MB/s)(16.9MiB/1008msec); 0 zone resets 00:19:16.495 slat (usec): min=5, max=8979, avg=106.98, stdev=525.23 00:19:16.495 clat (usec): min=7067, max=23696, avg=15099.34, stdev=1882.82 00:19:16.495 lat (usec): min=7089, max=23727, avg=15206.32, stdev=1913.89 00:19:16.495 clat percentiles (usec): 00:19:16.495 | 1.00th=[ 8848], 5.00th=[11863], 10.00th=[13698], 20.00th=[14484], 00:19:16.495 | 30.00th=[14615], 40.00th=[14746], 50.00th=[15008], 60.00th=[15270], 00:19:16.495 | 70.00th=[15533], 80.00th=[15926], 90.00th=[16712], 95.00th=[18220], 00:19:16.495 | 99.00th=[21103], 99.50th=[21890], 99.90th=[22414], 99.95th=[22414], 00:19:16.495 | 99.99th=[23725] 00:19:16.495 bw ( KiB/s): min=16384, max=17144, per=26.25%, avg=16764.00, stdev=537.40, samples=2 00:19:16.495 iops : min= 4096, max= 4286, avg=4191.00, stdev=134.35, samples=2 00:19:16.495 lat (msec) : 10=1.47%, 20=96.01%, 50=2.52% 00:19:16.495 cpu : usr=7.65%, sys=11.12%, ctx=442, majf=0, minf=11 00:19:16.495 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:16.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:16.495 issued rwts: total=4096,4318,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.495 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:16.495 00:19:16.495 Run status group 0 (all jobs): 00:19:16.495 READ: bw=59.4MiB/s (62.3MB/s), 12.6MiB/s-15.9MiB/s (13.3MB/s-16.6MB/s), io=59.9MiB (62.8MB), run=1005-1008msec 00:19:16.495 WRITE: bw=62.4MiB/s (65.4MB/s), 13.9MiB/s-16.7MiB/s (14.6MB/s-17.5MB/s), io=62.9MiB (65.9MB), run=1005-1008msec 00:19:16.495 00:19:16.495 Disk stats (read/write): 00:19:16.495 nvme0n1: ios=3223/3584, merge=0/0, ticks=25083/26666, in_queue=51749, util=98.00% 00:19:16.495 nvme0n2: ios=2600/3071, merge=0/0, ticks=48424/54386, in_queue=102810, util=98.98% 00:19:16.495 nvme0n3: ios=3211/3584, merge=0/0, ticks=24069/24901, in_queue=48970, util=98.85% 00:19:16.495 nvme0n4: ios=3500/3584, merge=0/0, ticks=26901/24478, in_queue=51379, util=97.90% 00:19:16.495 05:08:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:16.495 [global] 00:19:16.495 thread=1 00:19:16.495 invalidate=1 00:19:16.495 rw=randwrite 00:19:16.495 time_based=1 00:19:16.495 runtime=1 00:19:16.495 ioengine=libaio 00:19:16.495 direct=1 00:19:16.495 bs=4096 00:19:16.495 iodepth=128 00:19:16.495 norandommap=0 00:19:16.495 numjobs=1 00:19:16.495 00:19:16.495 verify_dump=1 00:19:16.495 verify_backlog=512 00:19:16.495 verify_state_save=0 00:19:16.495 do_verify=1 00:19:16.495 verify=crc32c-intel 00:19:16.495 [job0] 00:19:16.495 filename=/dev/nvme0n1 00:19:16.495 [job1] 00:19:16.495 filename=/dev/nvme0n2 00:19:16.495 [job2] 00:19:16.495 filename=/dev/nvme0n3 00:19:16.495 [job3] 00:19:16.495 filename=/dev/nvme0n4 00:19:16.495 Could not set queue depth (nvme0n1) 00:19:16.495 Could not set queue depth (nvme0n2) 00:19:16.495 Could not set queue depth (nvme0n3) 00:19:16.495 Could not set queue depth (nvme0n4) 00:19:16.495 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:16.495 job1: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:16.495 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:16.495 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:16.495 fio-3.35 00:19:16.495 Starting 4 threads 00:19:17.868 00:19:17.868 job0: (groupid=0, jobs=1): err= 0: pid=698650: Sat Jul 13 05:08:24 2024 00:19:17.868 read: IOPS=4052, BW=15.8MiB/s (16.6MB/s)(15.9MiB/1006msec) 00:19:17.868 slat (usec): min=3, max=26098, avg=122.87, stdev=982.78 00:19:17.868 clat (usec): min=3117, max=53074, avg=17358.89, stdev=7600.96 00:19:17.868 lat (usec): min=9004, max=53086, avg=17481.76, stdev=7634.29 00:19:17.868 clat percentiles (usec): 00:19:17.868 | 1.00th=[ 9372], 5.00th=[10028], 10.00th=[11863], 20.00th=[12387], 00:19:17.868 | 30.00th=[12780], 40.00th=[13698], 50.00th=[15008], 60.00th=[16581], 00:19:17.868 | 70.00th=[17695], 80.00th=[20579], 90.00th=[25560], 95.00th=[33424], 00:19:17.868 | 99.00th=[48497], 99.50th=[52691], 99.90th=[53216], 99.95th=[53216], 00:19:17.868 | 99.99th=[53216] 00:19:17.868 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:19:17.868 slat (usec): min=3, max=31549, avg=105.36, stdev=876.08 00:19:17.868 clat (usec): min=3482, max=38989, avg=13866.97, stdev=5504.11 00:19:17.868 lat (usec): min=3523, max=42374, avg=13972.32, stdev=5550.87 00:19:17.868 clat percentiles (usec): 00:19:17.868 | 1.00th=[ 4621], 5.00th=[ 7308], 10.00th=[ 7963], 20.00th=[ 8979], 00:19:17.868 | 30.00th=[10814], 40.00th=[12387], 50.00th=[13435], 60.00th=[13960], 00:19:17.868 | 70.00th=[15139], 80.00th=[16909], 90.00th=[20317], 95.00th=[25035], 00:19:17.868 | 99.00th=[33162], 99.50th=[38011], 99.90th=[39060], 99.95th=[39060], 00:19:17.868 | 99.99th=[39060] 00:19:17.868 bw ( KiB/s): min=16384, max=16384, per=27.92%, avg=16384.00, stdev= 0.00, samples=2 00:19:17.868 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:19:17.868 lat (msec) : 4=0.06%, 10=13.59%, 20=70.72%, 50=15.31%, 100=0.32% 00:19:17.868 cpu : usr=6.17%, sys=9.85%, ctx=274, majf=0, minf=11 00:19:17.868 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:17.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:17.868 issued rwts: total=4077,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.869 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:17.869 job1: (groupid=0, jobs=1): err= 0: pid=698651: Sat Jul 13 05:08:24 2024 00:19:17.869 read: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec) 00:19:17.869 slat (usec): min=2, max=61694, avg=160.43, stdev=1521.23 00:19:17.869 clat (msec): min=6, max=115, avg=20.37, stdev=17.31 00:19:17.869 lat (msec): min=6, max=115, avg=20.53, stdev=17.41 00:19:17.869 clat percentiles (msec): 00:19:17.869 | 1.00th=[ 7], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 13], 00:19:17.869 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 17], 00:19:17.869 | 70.00th=[ 18], 80.00th=[ 21], 90.00th=[ 33], 95.00th=[ 55], 00:19:17.869 | 99.00th=[ 102], 99.50th=[ 116], 99.90th=[ 116], 99.95th=[ 116], 00:19:17.869 | 99.99th=[ 116] 00:19:17.869 write: IOPS=3824, BW=14.9MiB/s (15.7MB/s)(15.1MiB/1012msec); 0 zone resets 00:19:17.869 slat (usec): min=3, max=18097, avg=96.99, stdev=632.61 00:19:17.869 clat (usec): min=4400, max=54980, avg=14217.99, 
stdev=6054.12 00:19:17.869 lat (usec): min=4409, max=54992, avg=14314.98, stdev=6095.28 00:19:17.869 clat percentiles (usec): 00:19:17.869 | 1.00th=[ 5211], 5.00th=[ 6915], 10.00th=[ 8029], 20.00th=[11338], 00:19:17.869 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13829], 60.00th=[13960], 00:19:17.869 | 70.00th=[14091], 80.00th=[14353], 90.00th=[20579], 95.00th=[22414], 00:19:17.869 | 99.00th=[49546], 99.50th=[49546], 99.90th=[49546], 99.95th=[53740], 00:19:17.869 | 99.99th=[54789] 00:19:17.869 bw ( KiB/s): min=12280, max=17664, per=25.51%, avg=14972.00, stdev=3807.06, samples=2 00:19:17.869 iops : min= 3070, max= 4416, avg=3743.00, stdev=951.77, samples=2 00:19:17.869 lat (msec) : 10=8.67%, 20=75.80%, 50=12.01%, 100=2.70%, 250=0.83% 00:19:17.869 cpu : usr=4.75%, sys=10.48%, ctx=388, majf=0, minf=11 00:19:17.869 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:17.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:17.869 issued rwts: total=3584,3870,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.869 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:17.869 job2: (groupid=0, jobs=1): err= 0: pid=698655: Sat Jul 13 05:08:24 2024 00:19:17.869 read: IOPS=3432, BW=13.4MiB/s (14.1MB/s)(13.5MiB/1005msec) 00:19:17.869 slat (usec): min=3, max=9201, avg=133.72, stdev=678.80 00:19:17.869 clat (usec): min=1882, max=35826, avg=17188.51, stdev=3284.91 00:19:17.869 lat (usec): min=5677, max=35865, avg=17322.23, stdev=3330.11 00:19:17.869 clat percentiles (usec): 00:19:17.869 | 1.00th=[10290], 5.00th=[13566], 10.00th=[14353], 20.00th=[15401], 00:19:17.869 | 30.00th=[15795], 40.00th=[16188], 50.00th=[16712], 60.00th=[16909], 00:19:17.869 | 70.00th=[16909], 80.00th=[19268], 90.00th=[21103], 95.00th=[22938], 00:19:17.869 | 99.00th=[28443], 99.50th=[31851], 99.90th=[32113], 99.95th=[34866], 00:19:17.869 | 99.99th=[35914] 00:19:17.869 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:19:17.869 slat (usec): min=4, max=26274, avg=136.72, stdev=852.88 00:19:17.869 clat (usec): min=11505, max=65985, avg=18523.42, stdev=5463.40 00:19:17.869 lat (usec): min=11526, max=66037, avg=18660.14, stdev=5534.36 00:19:17.869 clat percentiles (usec): 00:19:17.869 | 1.00th=[12649], 5.00th=[15139], 10.00th=[16057], 20.00th=[16450], 00:19:17.869 | 30.00th=[16712], 40.00th=[17171], 50.00th=[17433], 60.00th=[17695], 00:19:17.869 | 70.00th=[17957], 80.00th=[18220], 90.00th=[20579], 95.00th=[26608], 00:19:17.869 | 99.00th=[49021], 99.50th=[49021], 99.90th=[49546], 99.95th=[49546], 00:19:17.869 | 99.99th=[65799] 00:19:17.869 bw ( KiB/s): min=13648, max=15024, per=24.43%, avg=14336.00, stdev=972.98, samples=2 00:19:17.869 iops : min= 3412, max= 3756, avg=3584.00, stdev=243.24, samples=2 00:19:17.869 lat (msec) : 2=0.01%, 10=0.37%, 20=86.48%, 50=13.12%, 100=0.01% 00:19:17.869 cpu : usr=6.37%, sys=10.66%, ctx=400, majf=0, minf=11 00:19:17.869 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:17.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:17.869 issued rwts: total=3450,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.869 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:17.869 job3: (groupid=0, jobs=1): err= 0: pid=698661: Sat Jul 13 05:08:24 2024 00:19:17.869 read: IOPS=3062, BW=12.0MiB/s 
(12.5MB/s)(12.0MiB/1003msec) 00:19:17.869 slat (usec): min=2, max=21658, avg=143.34, stdev=819.98 00:19:17.869 clat (usec): min=10098, max=52645, avg=18585.60, stdev=4488.80 00:19:17.869 lat (usec): min=10119, max=52657, avg=18728.94, stdev=4491.32 00:19:17.869 clat percentiles (usec): 00:19:17.869 | 1.00th=[12256], 5.00th=[14615], 10.00th=[14877], 20.00th=[15270], 00:19:17.869 | 30.00th=[16450], 40.00th=[17433], 50.00th=[18220], 60.00th=[18744], 00:19:17.869 | 70.00th=[19268], 80.00th=[19530], 90.00th=[22938], 95.00th=[25822], 00:19:17.869 | 99.00th=[33424], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:19:17.869 | 99.99th=[52691] 00:19:17.869 write: IOPS=3287, BW=12.8MiB/s (13.5MB/s)(12.9MiB/1003msec); 0 zone resets 00:19:17.869 slat (usec): min=3, max=21640, avg=157.52, stdev=1062.77 00:19:17.869 clat (usec): min=534, max=80326, avg=21254.61, stdev=12926.83 00:19:17.869 lat (usec): min=993, max=80371, avg=21412.13, stdev=13023.71 00:19:17.869 clat percentiles (usec): 00:19:17.869 | 1.00th=[ 4555], 5.00th=[ 9503], 10.00th=[12649], 20.00th=[14222], 00:19:17.869 | 30.00th=[15533], 40.00th=[16450], 50.00th=[17171], 60.00th=[18482], 00:19:17.869 | 70.00th=[19268], 80.00th=[21627], 90.00th=[50070], 95.00th=[51643], 00:19:17.869 | 99.00th=[62653], 99.50th=[62653], 99.90th=[68682], 99.95th=[72877], 00:19:17.869 | 99.99th=[80217] 00:19:17.869 bw ( KiB/s): min=12288, max=13064, per=21.60%, avg=12676.00, stdev=548.71, samples=2 00:19:17.869 iops : min= 3072, max= 3266, avg=3169.00, stdev=137.18, samples=2 00:19:17.869 lat (usec) : 750=0.02%, 1000=0.03% 00:19:17.869 lat (msec) : 4=0.14%, 10=3.31%, 20=74.50%, 50=16.42%, 100=5.57% 00:19:17.869 cpu : usr=4.99%, sys=8.78%, ctx=349, majf=0, minf=17 00:19:17.869 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:17.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:17.869 issued rwts: total=3072,3297,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.869 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:17.869 00:19:17.869 Run status group 0 (all jobs): 00:19:17.869 READ: bw=54.7MiB/s (57.4MB/s), 12.0MiB/s-15.8MiB/s (12.5MB/s-16.6MB/s), io=55.4MiB (58.1MB), run=1003-1012msec 00:19:17.869 WRITE: bw=57.3MiB/s (60.1MB/s), 12.8MiB/s-15.9MiB/s (13.5MB/s-16.7MB/s), io=58.0MiB (60.8MB), run=1003-1012msec 00:19:17.869 00:19:17.869 Disk stats (read/write): 00:19:17.869 nvme0n1: ios=3117/3584, merge=0/0, ticks=48339/46467, in_queue=94806, util=100.00% 00:19:17.869 nvme0n2: ios=2787/3072, merge=0/0, ticks=43996/36308, in_queue=80304, util=97.56% 00:19:17.869 nvme0n3: ios=2834/3072, merge=0/0, ticks=24719/26936, in_queue=51655, util=99.79% 00:19:17.869 nvme0n4: ios=2581/2703, merge=0/0, ticks=15544/22587, in_queue=38131, util=97.68% 00:19:17.869 05:08:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:17.869 05:08:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=698821 00:19:17.869 05:08:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:17.869 05:08:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:17.869 [global] 00:19:17.869 thread=1 00:19:17.869 invalidate=1 00:19:17.869 rw=read 00:19:17.869 time_based=1 00:19:17.869 runtime=10 00:19:17.869 ioengine=libaio 00:19:17.869 direct=1 00:19:17.869 bs=4096 00:19:17.869 iodepth=1 00:19:17.869 norandommap=1 
00:19:17.869 numjobs=1 00:19:17.869 00:19:17.869 [job0] 00:19:17.869 filename=/dev/nvme0n1 00:19:17.869 [job1] 00:19:17.869 filename=/dev/nvme0n2 00:19:17.869 [job2] 00:19:17.869 filename=/dev/nvme0n3 00:19:17.869 [job3] 00:19:17.869 filename=/dev/nvme0n4 00:19:17.869 Could not set queue depth (nvme0n1) 00:19:17.869 Could not set queue depth (nvme0n2) 00:19:17.869 Could not set queue depth (nvme0n3) 00:19:17.869 Could not set queue depth (nvme0n4) 00:19:18.126 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:18.126 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:18.127 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:18.127 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:18.127 fio-3.35 00:19:18.127 Starting 4 threads 00:19:21.406 05:08:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:21.406 05:08:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:21.406 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=18587648, buflen=4096 00:19:21.406 fio: pid=699001, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:21.406 05:08:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:21.406 05:08:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:21.406 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=5156864, buflen=4096 00:19:21.406 fio: pid=699000, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:21.664 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=1003520, buflen=4096 00:19:21.664 fio: pid=698998, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:21.664 05:08:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:21.664 05:08:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:21.922 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=18993152, buflen=4096 00:19:21.922 fio: pid=698999, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:21.923 00:19:21.923 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=698998: Sat Jul 13 05:08:28 2024 00:19:21.923 read: IOPS=70, BW=282KiB/s (289kB/s)(980KiB/3477msec) 00:19:21.923 slat (usec): min=4, max=11835, avg=102.83, stdev=975.64 00:19:21.923 clat (usec): min=310, max=41870, avg=14035.98, stdev=19162.19 00:19:21.923 lat (usec): min=316, max=52940, avg=14139.18, stdev=19315.32 00:19:21.923 clat percentiles (usec): 00:19:21.923 | 1.00th=[ 314], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 359], 00:19:21.923 | 30.00th=[ 383], 40.00th=[ 420], 50.00th=[ 490], 60.00th=[ 562], 00:19:21.923 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:21.923 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:19:21.923 | 99.99th=[41681] 00:19:21.923 bw 
( KiB/s): min= 104, max= 1216, per=2.74%, avg=312.00, stdev=443.84, samples=6 00:19:21.923 iops : min= 26, max= 304, avg=78.00, stdev=110.96, samples=6 00:19:21.923 lat (usec) : 500=50.81%, 750=14.63%, 1000=0.41% 00:19:21.923 lat (msec) : 10=0.41%, 50=33.33% 00:19:21.923 cpu : usr=0.09%, sys=0.09%, ctx=252, majf=0, minf=1 00:19:21.923 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:21.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.923 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.923 issued rwts: total=246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.923 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:21.923 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=698999: Sat Jul 13 05:08:28 2024 00:19:21.923 read: IOPS=1236, BW=4945KiB/s (5063kB/s)(18.1MiB/3751msec) 00:19:21.923 slat (usec): min=4, max=5472, avg=14.64, stdev=84.75 00:19:21.923 clat (usec): min=282, max=56019, avg=786.07, stdev=3915.12 00:19:21.923 lat (usec): min=288, max=56035, avg=800.70, stdev=3932.72 00:19:21.923 clat percentiles (usec): 00:19:21.923 | 1.00th=[ 297], 5.00th=[ 310], 10.00th=[ 330], 20.00th=[ 367], 00:19:21.923 | 30.00th=[ 383], 40.00th=[ 396], 50.00th=[ 408], 60.00th=[ 429], 00:19:21.923 | 70.00th=[ 441], 80.00th=[ 449], 90.00th=[ 465], 95.00th=[ 502], 00:19:21.923 | 99.00th=[ 816], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:19:21.923 | 99.99th=[55837] 00:19:21.923 bw ( KiB/s): min= 96, max= 8976, per=45.65%, avg=5199.71, stdev=3868.78, samples=7 00:19:21.923 iops : min= 24, max= 2244, avg=1299.86, stdev=967.23, samples=7 00:19:21.923 lat (usec) : 500=94.74%, 750=4.18%, 1000=0.11% 00:19:21.923 lat (msec) : 2=0.02%, 50=0.91%, 100=0.02% 00:19:21.923 cpu : usr=1.20%, sys=2.03%, ctx=4642, majf=0, minf=1 00:19:21.923 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:21.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.923 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.923 issued rwts: total=4638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.923 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:21.923 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=699000: Sat Jul 13 05:08:28 2024 00:19:21.923 read: IOPS=393, BW=1571KiB/s (1609kB/s)(5036KiB/3205msec) 00:19:21.923 slat (usec): min=4, max=144, avg=11.03, stdev= 7.69 00:19:21.923 clat (usec): min=283, max=41661, avg=2513.39, stdev=8994.68 00:19:21.923 lat (usec): min=288, max=41676, avg=2524.41, stdev=8997.23 00:19:21.923 clat percentiles (usec): 00:19:21.923 | 1.00th=[ 289], 5.00th=[ 302], 10.00th=[ 314], 20.00th=[ 326], 00:19:21.923 | 30.00th=[ 338], 40.00th=[ 351], 50.00th=[ 379], 60.00th=[ 392], 00:19:21.923 | 70.00th=[ 424], 80.00th=[ 478], 90.00th=[ 553], 95.00th=[40633], 00:19:21.923 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:19:21.923 | 99.99th=[41681] 00:19:21.923 bw ( KiB/s): min= 144, max= 3976, per=9.20%, avg=1048.00, stdev=1495.87, samples=6 00:19:21.923 iops : min= 36, max= 994, avg=262.00, stdev=373.97, samples=6 00:19:21.923 lat (usec) : 500=82.70%, 750=11.59%, 1000=0.24% 00:19:21.923 lat (msec) : 4=0.08%, 20=0.08%, 50=5.24% 00:19:21.923 cpu : usr=0.09%, sys=0.69%, ctx=1261, majf=0, minf=1 00:19:21.923 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:19:21.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.923 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.923 issued rwts: total=1260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.923 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:21.923 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=699001: Sat Jul 13 05:08:28 2024 00:19:21.923 read: IOPS=1546, BW=6185KiB/s (6333kB/s)(17.7MiB/2935msec) 00:19:21.923 slat (usec): min=4, max=154, avg=18.44, stdev=10.32 00:19:21.923 clat (usec): min=337, max=41417, avg=618.84, stdev=2882.64 00:19:21.923 lat (usec): min=344, max=41432, avg=637.27, stdev=2882.80 00:19:21.923 clat percentiles (usec): 00:19:21.923 | 1.00th=[ 351], 5.00th=[ 359], 10.00th=[ 367], 20.00th=[ 379], 00:19:21.923 | 30.00th=[ 392], 40.00th=[ 400], 50.00th=[ 404], 60.00th=[ 412], 00:19:21.923 | 70.00th=[ 420], 80.00th=[ 437], 90.00th=[ 469], 95.00th=[ 506], 00:19:21.923 | 99.00th=[ 603], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:19:21.923 | 99.99th=[41157] 00:19:21.923 bw ( KiB/s): min= 232, max= 9480, per=63.61%, avg=7244.80, stdev=3942.24, samples=5 00:19:21.923 iops : min= 58, max= 2370, avg=1811.20, stdev=985.56, samples=5 00:19:21.923 lat (usec) : 500=93.96%, 750=5.42%, 1000=0.07% 00:19:21.923 lat (msec) : 10=0.02%, 50=0.51% 00:19:21.923 cpu : usr=0.99%, sys=3.51%, ctx=4541, majf=0, minf=1 00:19:21.923 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:21.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.923 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.923 issued rwts: total=4539,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.923 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:21.923 00:19:21.923 Run status group 0 (all jobs): 00:19:21.923 READ: bw=11.1MiB/s (11.7MB/s), 282KiB/s-6185KiB/s (289kB/s-6333kB/s), io=41.7MiB (43.7MB), run=2935-3751msec 00:19:21.923 00:19:21.923 Disk stats (read/write): 00:19:21.923 nvme0n1: ios=265/0, merge=0/0, ticks=3525/0, in_queue=3525, util=98.83% 00:19:21.923 nvme0n2: ios=4633/0, merge=0/0, ticks=3409/0, in_queue=3409, util=96.22% 00:19:21.923 nvme0n3: ios=1039/0, merge=0/0, ticks=3071/0, in_queue=3071, util=96.72% 00:19:21.923 nvme0n4: ios=4536/0, merge=0/0, ticks=2607/0, in_queue=2607, util=96.71% 00:19:21.923 05:08:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:21.923 05:08:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:22.492 05:08:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:22.492 05:08:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:22.750 05:08:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:22.750 05:08:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:23.008 05:08:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:23.008 05:08:29 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:23.266 05:08:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:23.266 05:08:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:23.524 05:08:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:23.524 05:08:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 698821 00:19:23.524 05:08:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:23.524 05:08:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:24.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:24.457 05:08:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:24.457 05:08:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:19:24.457 05:08:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:24.457 05:08:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:24.457 05:08:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:24.457 05:08:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:24.457 05:08:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:19:24.457 05:08:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:24.457 05:08:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:24.457 nvmf hotplug test: fio failed as expected 00:19:24.457 05:08:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:24.715 05:08:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:24.715 05:08:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:24.715 05:08:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:24.715 05:08:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:24.715 05:08:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:24.715 05:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:24.715 05:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:24.715 05:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:24.715 05:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:24.715 05:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:24.715 05:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:24.715 rmmod nvme_tcp 00:19:24.715 rmmod nvme_fabrics 00:19:24.715 rmmod nvme_keyring 00:19:24.715 05:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:24.715 05:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:24.715 05:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:19:24.715 05:08:31 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@489 -- # '[' -n 696753 ']' 00:19:24.715 05:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 696753 00:19:24.715 05:08:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 696753 ']' 00:19:24.715 05:08:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 696753 00:19:24.715 05:08:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:19:24.715 05:08:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:24.715 05:08:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 696753 00:19:24.715 05:08:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:24.715 05:08:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:24.715 05:08:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 696753' 00:19:24.715 killing process with pid 696753 00:19:24.715 05:08:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 696753 00:19:24.715 05:08:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 696753 00:19:26.091 05:08:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:26.091 05:08:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:26.091 05:08:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:26.091 05:08:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:26.091 05:08:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:26.091 05:08:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.091 05:08:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:26.091 05:08:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.993 05:08:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:27.993 00:19:27.993 real 0m26.272s 00:19:27.993 user 1m29.406s 00:19:27.993 sys 0m7.489s 00:19:27.993 05:08:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:27.993 05:08:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.993 ************************************ 00:19:27.993 END TEST nvmf_fio_target 00:19:27.993 ************************************ 00:19:27.993 05:08:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:27.993 05:08:34 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:27.993 05:08:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:27.993 05:08:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:27.993 05:08:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:27.993 ************************************ 00:19:27.993 START TEST nvmf_bdevio 00:19:27.993 ************************************ 00:19:27.993 05:08:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:28.250 * Looking for test storage... 
00:19:28.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.250 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:28.251 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:28.251 05:08:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:28.251 05:08:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:30.150 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:30.150 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:30.150 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:30.150 
Found net devices under 0000:0a:00.1: cvl_0_1 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:30.150 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:30.151 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:30.151 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:30.151 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:30.151 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:30.151 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.151 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:30.151 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:30.151 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:30.151 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:30.151 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:30.151 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:30.151 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:30.151 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:30.151 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:30.151 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:30.151 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:30.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:30.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:19:30.408 00:19:30.408 --- 10.0.0.2 ping statistics --- 00:19:30.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.408 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:19:30.408 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:30.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:30.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:19:30.408 00:19:30.408 --- 10.0.0.1 ping statistics --- 00:19:30.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.408 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:19:30.408 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:30.408 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:30.408 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:30.408 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:30.408 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:30.408 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:30.408 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:30.408 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:30.408 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:30.408 05:08:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:30.408 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:30.408 05:08:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:30.408 05:08:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:30.408 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=701847 00:19:30.408 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:30.408 05:08:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 701847 00:19:30.408 05:08:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 701847 ']' 00:19:30.408 05:08:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.408 05:08:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:30.408 05:08:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.408 05:08:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:30.408 05:08:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:30.409 [2024-07-13 05:08:36.772399] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:30.409 [2024-07-13 05:08:36.772526] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.409 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.666 [2024-07-13 05:08:36.910999] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:30.666 [2024-07-13 05:08:37.149682] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.666 [2024-07-13 05:08:37.149766] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:30.666 [2024-07-13 05:08:37.149795] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.666 [2024-07-13 05:08:37.149817] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.666 [2024-07-13 05:08:37.149839] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:30.666 [2024-07-13 05:08:37.149975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:30.666 [2024-07-13 05:08:37.150034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:19:30.666 [2024-07-13 05:08:37.150080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:30.666 [2024-07-13 05:08:37.150091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:19:31.231 05:08:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:31.231 05:08:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:19:31.231 05:08:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:31.231 05:08:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:31.231 05:08:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:31.489 05:08:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.489 05:08:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:31.489 05:08:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.489 05:08:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:31.489 [2024-07-13 05:08:37.756291] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.489 05:08:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.489 05:08:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:31.489 05:08:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.489 05:08:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:31.489 Malloc0 00:19:31.489 05:08:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.489 05:08:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:31.489 05:08:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.489 05:08:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:31.489 05:08:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.489 05:08:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:31.489 05:08:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.489 05:08:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:31.489 05:08:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.489 05:08:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:31.489 05:08:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.489 05:08:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:19:31.489 [2024-07-13 05:08:37.860531] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.489 05:08:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.489 05:08:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:31.489 05:08:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:31.489 05:08:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:31.489 05:08:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:31.489 05:08:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:31.489 05:08:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:31.489 { 00:19:31.489 "params": { 00:19:31.489 "name": "Nvme$subsystem", 00:19:31.489 "trtype": "$TEST_TRANSPORT", 00:19:31.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.489 "adrfam": "ipv4", 00:19:31.489 "trsvcid": "$NVMF_PORT", 00:19:31.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.489 "hdgst": ${hdgst:-false}, 00:19:31.489 "ddgst": ${ddgst:-false} 00:19:31.489 }, 00:19:31.489 "method": "bdev_nvme_attach_controller" 00:19:31.489 } 00:19:31.489 EOF 00:19:31.489 )") 00:19:31.490 05:08:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:31.490 05:08:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:31.490 05:08:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:31.490 05:08:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:31.490 "params": { 00:19:31.490 "name": "Nvme1", 00:19:31.490 "trtype": "tcp", 00:19:31.490 "traddr": "10.0.0.2", 00:19:31.490 "adrfam": "ipv4", 00:19:31.490 "trsvcid": "4420", 00:19:31.490 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.490 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:31.490 "hdgst": false, 00:19:31.490 "ddgst": false 00:19:31.490 }, 00:19:31.490 "method": "bdev_nvme_attach_controller" 00:19:31.490 }' 00:19:31.490 [2024-07-13 05:08:37.942639] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:19:31.490 [2024-07-13 05:08:37.942779] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid702047 ] 00:19:31.748 EAL: No free 2048 kB hugepages reported on node 1 00:19:31.748 [2024-07-13 05:08:38.070857] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:32.006 [2024-07-13 05:08:38.311738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.006 [2024-07-13 05:08:38.311782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.006 [2024-07-13 05:08:38.311790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.263 I/O targets: 00:19:32.263 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:32.263 00:19:32.263 00:19:32.263 CUnit - A unit testing framework for C - Version 2.1-3 00:19:32.264 http://cunit.sourceforge.net/ 00:19:32.264 00:19:32.264 00:19:32.264 Suite: bdevio tests on: Nvme1n1 00:19:32.521 Test: blockdev write read block ...passed 00:19:32.521 Test: blockdev write zeroes read block ...passed 00:19:32.521 Test: blockdev write zeroes read no split ...passed 00:19:32.521 Test: blockdev write zeroes read split ...passed 00:19:32.521 Test: blockdev write zeroes read split partial ...passed 00:19:32.521 Test: blockdev reset ...[2024-07-13 05:08:39.012993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:32.521 [2024-07-13 05:08:39.013186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:19:32.779 [2024-07-13 05:08:39.074951] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:32.779 passed 00:19:32.779 Test: blockdev write read 8 blocks ...passed 00:19:32.779 Test: blockdev write read size > 128k ...passed 00:19:32.779 Test: blockdev write read invalid size ...passed 00:19:32.779 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:32.779 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:32.779 Test: blockdev write read max offset ...passed 00:19:32.779 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:32.779 Test: blockdev writev readv 8 blocks ...passed 00:19:32.779 Test: blockdev writev readv 30 x 1block ...passed 00:19:33.037 Test: blockdev writev readv block ...passed 00:19:33.037 Test: blockdev writev readv size > 128k ...passed 00:19:33.037 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:33.037 Test: blockdev comparev and writev ...[2024-07-13 05:08:39.294564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:33.037 [2024-07-13 05:08:39.294647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:33.037 [2024-07-13 05:08:39.294692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:33.037 [2024-07-13 05:08:39.294721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.037 [2024-07-13 05:08:39.295229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:33.037 [2024-07-13 05:08:39.295265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:33.037 [2024-07-13 05:08:39.295299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:33.037 [2024-07-13 05:08:39.295324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:33.037 [2024-07-13 05:08:39.295811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:33.037 [2024-07-13 05:08:39.295845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:33.037 [2024-07-13 05:08:39.295886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:33.037 [2024-07-13 05:08:39.295915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:33.037 [2024-07-13 05:08:39.296401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:33.037 [2024-07-13 05:08:39.296435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:33.037 [2024-07-13 05:08:39.296468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:33.037 [2024-07-13 05:08:39.296506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:33.037 passed 00:19:33.037 Test: blockdev nvme passthru rw ...passed 00:19:33.037 Test: blockdev nvme passthru vendor specific ...[2024-07-13 05:08:39.380331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:33.037 [2024-07-13 05:08:39.380397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:33.037 [2024-07-13 05:08:39.380665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:33.037 [2024-07-13 05:08:39.380697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:33.037 [2024-07-13 05:08:39.380923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:33.037 [2024-07-13 05:08:39.380956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:33.037 [2024-07-13 05:08:39.381180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:33.037 [2024-07-13 05:08:39.381218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:33.037 passed 00:19:33.037 Test: blockdev nvme admin passthru ...passed 00:19:33.037 Test: blockdev copy ...passed 00:19:33.037 00:19:33.037 Run Summary: Type Total Ran Passed Failed Inactive 00:19:33.037 suites 1 1 n/a 0 0 00:19:33.037 tests 23 23 23 0 0 00:19:33.037 asserts 152 152 152 0 n/a 00:19:33.037 00:19:33.037 Elapsed time = 1.397 seconds 00:19:33.969 05:08:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:33.969 05:08:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.969 05:08:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:33.969 05:08:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.969 05:08:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:33.969 05:08:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:33.969 05:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:33.969 05:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:33.969 05:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:33.969 05:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:33.969 05:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:33.969 05:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:33.969 rmmod nvme_tcp 00:19:33.969 rmmod nvme_fabrics 00:19:33.969 rmmod nvme_keyring 00:19:33.969 05:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:33.969 05:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:33.969 05:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:33.969 05:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 701847 ']' 00:19:33.969 05:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 701847 00:19:33.969 05:08:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
701847 ']' 00:19:33.969 05:08:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 701847 00:19:33.969 05:08:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:19:33.969 05:08:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:33.969 05:08:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 701847 00:19:34.227 05:08:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:19:34.227 05:08:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:19:34.227 05:08:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 701847' 00:19:34.227 killing process with pid 701847 00:19:34.227 05:08:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 701847 00:19:34.227 05:08:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 701847 00:19:35.600 05:08:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:35.600 05:08:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:35.600 05:08:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:35.600 05:08:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:35.600 05:08:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:35.600 05:08:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.600 05:08:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:35.600 05:08:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.501 05:08:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:37.501 00:19:37.501 real 0m9.469s 00:19:37.501 user 0m22.993s 00:19:37.501 sys 0m2.343s 00:19:37.501 05:08:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:37.501 05:08:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:37.501 ************************************ 00:19:37.501 END TEST nvmf_bdevio 00:19:37.501 ************************************ 00:19:37.501 05:08:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:37.501 05:08:43 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:37.501 05:08:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:37.501 05:08:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:37.501 05:08:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:37.501 ************************************ 00:19:37.501 START TEST nvmf_auth_target 00:19:37.501 ************************************ 00:19:37.501 05:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:37.758 * Looking for test storage... 
00:19:37.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:37.758 05:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:37.758 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:37.758 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:37.758 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:37.758 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:37.758 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:37.758 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:37.758 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:37.758 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:37.758 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:37.758 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:37.758 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:37.759 05:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:39.657 05:08:45 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:39.657 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:39.657 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:39.657 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:19:39.658 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:39.658 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:39.658 05:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:39.658 05:08:46 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:39.658 05:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:39.658 05:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:39.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:39.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:19:39.658 00:19:39.658 --- 10.0.0.2 ping statistics --- 00:19:39.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.658 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:19:39.658 05:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:39.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:39.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:19:39.658 00:19:39.658 --- 10.0.0.1 ping statistics --- 00:19:39.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.658 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:19:39.658 05:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:39.658 05:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:39.658 05:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:39.658 05:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:39.658 05:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:39.658 05:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:39.658 05:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:39.658 05:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:39.658 05:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:39.658 05:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:39.658 05:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:39.658 05:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:39.658 05:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.658 05:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=704386 00:19:39.658 05:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:39.658 05:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 704386 00:19:39.658 05:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 704386 ']' 00:19:39.658 05:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.658 05:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:39.658 05:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
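Same startup pattern as the bdevio run earlier, but tuned for the auth test: the target gets -L nvmf_auth so it traces its DH-HMAC-CHAP state machine, and a second SPDK app is launched (next entries) as the "host" side on its own RPC socket. A sketch of the two processes, with the binary paths shortened to the spdk checkout root:

    # target under test, inside the namespace, with nvmf_auth logging enabled
    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
    # host-side app that the hostrpc helper drives over /var/tmp/host.sock (-m 2: core 1 only)
    ./build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &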
00:19:39.658 05:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:39.658 05:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.590 05:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:40.590 05:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:40.590 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:40.590 05:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:40.590 05:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=704536 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d0ba66d594d49fbcc46125559e525f486cb5363f557fd0fb 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.uk7 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d0ba66d594d49fbcc46125559e525f486cb5363f557fd0fb 0 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d0ba66d594d49fbcc46125559e525f486cb5363f557fd0fb 0 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d0ba66d594d49fbcc46125559e525f486cb5363f557fd0fb 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.uk7 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.uk7 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.uk7 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7e7fe18aefc250d4ab65b1ab3bd7c945703b31d408a3073805d28869709bd6ec 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.VEF 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7e7fe18aefc250d4ab65b1ab3bd7c945703b31d408a3073805d28869709bd6ec 3 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7e7fe18aefc250d4ab65b1ab3bd7c945703b31d408a3073805d28869709bd6ec 3 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7e7fe18aefc250d4ab65b1ab3bd7c945703b31d408a3073805d28869709bd6ec 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.VEF 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.VEF 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.VEF 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0898fbf56b0f1e47628bdd860b2827af 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.gZQ 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0898fbf56b0f1e47628bdd860b2827af 1 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0898fbf56b0f1e47628bdd860b2827af 1 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=0898fbf56b0f1e47628bdd860b2827af 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.gZQ 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.gZQ 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.gZQ 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=268355a2be17be67edb302ddc3eb9493a537ee9b81a572e1 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.l3E 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 268355a2be17be67edb302ddc3eb9493a537ee9b81a572e1 2 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 268355a2be17be67edb302ddc3eb9493a537ee9b81a572e1 2 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=268355a2be17be67edb302ddc3eb9493a537ee9b81a572e1 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.l3E 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.l3E 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.l3E 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ead12fe810e9567ad8e87b799c22b5206a4fac54a689802a 00:19:40.849 
05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.kYK 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ead12fe810e9567ad8e87b799c22b5206a4fac54a689802a 2 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ead12fe810e9567ad8e87b799c22b5206a4fac54a689802a 2 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ead12fe810e9567ad8e87b799c22b5206a4fac54a689802a 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.kYK 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.kYK 00:19:40.849 05:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.kYK 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5f48821fc1a65cffd6d8261e384a56d8 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.p8r 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5f48821fc1a65cffd6d8261e384a56d8 1 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5f48821fc1a65cffd6d8261e384a56d8 1 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5f48821fc1a65cffd6d8261e384a56d8 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.p8r 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.p8r 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.p8r 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1bff1c8abdc6b59355ebb6f15d4e0b6fd81e43d904bb920a0dc14a83ef135d1c 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.bSg 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1bff1c8abdc6b59355ebb6f15d4e0b6fd81e43d904bb920a0dc14a83ef135d1c 3 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1bff1c8abdc6b59355ebb6f15d4e0b6fd81e43d904bb920a0dc14a83ef135d1c 3 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1bff1c8abdc6b59355ebb6f15d4e0b6fd81e43d904bb920a0dc14a83ef135d1c 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.bSg 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.bSg 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.bSg 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 704386 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 704386 ']' 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
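All four keys[]/ckeys[] generated above follow one recipe: read N random bytes from /dev/urandom as a hex string, then wrap that string in the TP-8006 DH-HMAC-CHAP secret representation DHHC-1:<digest>:<base64>: (digest index per the map above: null=00, sha256=01, sha384=02, sha512=03). The python one-liner itself is elided in the xtrace; judging from the secrets used later in this run (e.g. DHHC-1:00:ZDBiYTY2... base64-decodes to the hex text of keys[0]), the payload is assumed to be the key text plus a 4-byte CRC-32 trailer. A sketch for the null-digest 48-char case:

    key=$(xxd -p -c0 -l 24 /dev/urandom)     # 24 random bytes -> 48 hex chars
    # assumed wrapping step: base64(key || crc32(key)), little-endian CRC trailer
    b64=$(python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print(base64.b64encode(k+zlib.crc32(k).to_bytes(4,"little")).decode())' "$key")
    file=$(mktemp -t spdk.key-null.XXX)
    printf 'DHHC-1:00:%s:\n' "$b64" > "$file" && chmod 0600 "$file"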
00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:41.108 05:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.366 05:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:41.366 05:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:41.366 05:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 704536 /var/tmp/host.sock 00:19:41.366 05:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 704536 ']' 00:19:41.366 05:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:19:41.366 05:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:41.366 05:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:41.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:41.366 05:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:41.366 05:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.301 05:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:42.301 05:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:42.301 05:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:42.301 05:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.301 05:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.301 05:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.301 05:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:42.301 05:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.uk7 00:19:42.301 05:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.301 05:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.301 05:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.301 05:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.uk7 00:19:42.301 05:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.uk7 00:19:42.301 05:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.VEF ]] 00:19:42.301 05:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VEF 00:19:42.301 05:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.301 05:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.301 05:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.301 05:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VEF 00:19:42.301 05:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VEF 00:19:42.560 05:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:42.560 05:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.gZQ 00:19:42.560 05:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.560 05:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.560 05:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.560 05:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.gZQ 00:19:42.560 05:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.gZQ 00:19:42.818 05:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.l3E ]] 00:19:42.818 05:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.l3E 00:19:42.818 05:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.818 05:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.818 05:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.818 05:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.l3E 00:19:42.818 05:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.l3E 00:19:43.076 05:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:43.076 05:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.kYK 00:19:43.076 05:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.076 05:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.076 05:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.076 05:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.kYK 00:19:43.076 05:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.kYK 00:19:43.334 05:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.p8r ]] 00:19:43.334 05:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.p8r 00:19:43.334 05:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.334 05:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.334 05:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.334 05:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.p8r 00:19:43.334 05:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.p8r 00:19:43.593 05:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:43.593 05:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.bSg 00:19:43.593 05:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.593 05:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.593 05:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.593 05:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.bSg 00:19:43.593 05:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.bSg 00:19:43.851 05:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:19:43.851 05:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:43.851 05:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:43.851 05:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.851 05:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:43.851 05:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:44.109 05:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:44.109 05:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.109 05:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:44.110 05:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:44.110 05:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:44.110 05:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.110 05:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.110 05:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.110 05:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.110 05:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.110 05:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.110 05:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.677 00:19:44.677 05:08:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.677 05:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.677 05:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.677 05:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.677 05:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.677 05:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.677 05:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.677 05:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.677 05:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.677 { 00:19:44.677 "cntlid": 1, 00:19:44.677 "qid": 0, 00:19:44.677 "state": "enabled", 00:19:44.677 "thread": "nvmf_tgt_poll_group_000", 00:19:44.677 "listen_address": { 00:19:44.677 "trtype": "TCP", 00:19:44.677 "adrfam": "IPv4", 00:19:44.677 "traddr": "10.0.0.2", 00:19:44.677 "trsvcid": "4420" 00:19:44.677 }, 00:19:44.677 "peer_address": { 00:19:44.677 "trtype": "TCP", 00:19:44.677 "adrfam": "IPv4", 00:19:44.677 "traddr": "10.0.0.1", 00:19:44.677 "trsvcid": "60226" 00:19:44.677 }, 00:19:44.677 "auth": { 00:19:44.677 "state": "completed", 00:19:44.677 "digest": "sha256", 00:19:44.677 "dhgroup": "null" 00:19:44.677 } 00:19:44.677 } 00:19:44.677 ]' 00:19:44.677 05:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.935 05:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.935 05:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.935 05:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:44.935 05:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.935 05:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.935 05:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.935 05:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.194 05:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDBiYTY2ZDU5NGQ0OWZiY2M0NjEyNTU1OWU1MjVmNDg2Y2I1MzYzZjU1N2ZkMGZicfhNbA==: --dhchap-ctrl-secret DHHC-1:03:N2U3ZmUxOGFlZmMyNTBkNGFiNjViMWFiM2JkN2M5NDU3MDNiMzFkNDA4YTMwNzM4MDVkMjg4Njk3MDliZDZlY/8RXHc=: 00:19:46.128 05:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.128 05:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:46.128 05:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.128 05:08:52 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.128 05:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.128 05:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.128 05:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:46.128 05:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:46.386 05:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:46.386 05:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.386 05:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:46.386 05:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:46.386 05:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:46.386 05:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.386 05:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.386 05:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.386 05:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.386 05:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.386 05:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.386 05:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.643 00:19:46.643 05:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.643 05:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.643 05:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.900 05:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.900 05:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.900 05:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.900 05:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.900 05:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.900 05:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.900 { 00:19:46.900 "cntlid": 3, 00:19:46.900 "qid": 0, 00:19:46.900 
"state": "enabled", 00:19:46.900 "thread": "nvmf_tgt_poll_group_000", 00:19:46.900 "listen_address": { 00:19:46.900 "trtype": "TCP", 00:19:46.900 "adrfam": "IPv4", 00:19:46.900 "traddr": "10.0.0.2", 00:19:46.900 "trsvcid": "4420" 00:19:46.900 }, 00:19:46.900 "peer_address": { 00:19:46.900 "trtype": "TCP", 00:19:46.900 "adrfam": "IPv4", 00:19:46.900 "traddr": "10.0.0.1", 00:19:46.900 "trsvcid": "60242" 00:19:46.900 }, 00:19:46.900 "auth": { 00:19:46.900 "state": "completed", 00:19:46.900 "digest": "sha256", 00:19:46.900 "dhgroup": "null" 00:19:46.900 } 00:19:46.900 } 00:19:46.900 ]' 00:19:46.900 05:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.156 05:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.156 05:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.156 05:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:47.156 05:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.156 05:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.156 05:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.156 05:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.414 05:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDg5OGZiZjU2YjBmMWU0NzYyOGJkZDg2MGIyODI3YWYhEdFv: --dhchap-ctrl-secret DHHC-1:02:MjY4MzU1YTJiZTE3YmU2N2VkYjMwMmRkYzNlYjk0OTNhNTM3ZWU5YjgxYTU3MmUxSzAYhg==: 00:19:48.346 05:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.347 05:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.347 05:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.347 05:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.347 05:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.347 05:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.347 05:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:48.347 05:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:48.605 05:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:48.605 05:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.605 05:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:48.605 05:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:48.605 05:08:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:48.605 05:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.605 05:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.605 05:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.605 05:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.605 05:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.605 05:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.605 05:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.863 00:19:48.863 05:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.863 05:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.863 05:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.121 05:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.121 05:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.121 05:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.121 05:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.121 05:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.121 05:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.121 { 00:19:49.121 "cntlid": 5, 00:19:49.121 "qid": 0, 00:19:49.121 "state": "enabled", 00:19:49.121 "thread": "nvmf_tgt_poll_group_000", 00:19:49.121 "listen_address": { 00:19:49.121 "trtype": "TCP", 00:19:49.121 "adrfam": "IPv4", 00:19:49.121 "traddr": "10.0.0.2", 00:19:49.121 "trsvcid": "4420" 00:19:49.121 }, 00:19:49.121 "peer_address": { 00:19:49.121 "trtype": "TCP", 00:19:49.121 "adrfam": "IPv4", 00:19:49.121 "traddr": "10.0.0.1", 00:19:49.121 "trsvcid": "60280" 00:19:49.121 }, 00:19:49.121 "auth": { 00:19:49.121 "state": "completed", 00:19:49.121 "digest": "sha256", 00:19:49.121 "dhgroup": "null" 00:19:49.121 } 00:19:49.121 } 00:19:49.121 ]' 00:19:49.121 05:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.121 05:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.121 05:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.121 05:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:49.121 05:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:19:49.442 05:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.442 05:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.442 05:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.700 05:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWFkMTJmZTgxMGU5NTY3YWQ4ZTg3Yjc5OWMyMmI1MjA2YTRmYWM1NGE2ODk4MDJheb3/8w==: --dhchap-ctrl-secret DHHC-1:01:NWY0ODgyMWZjMWE2NWNmZmQ2ZDgyNjFlMzg0YTU2ZDgcsnP3: 00:19:50.636 05:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.637 05:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.637 05:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.637 05:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.637 05:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.637 05:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.637 05:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:50.637 05:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:50.637 05:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:50.637 05:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.637 05:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:50.637 05:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:50.637 05:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:50.637 05:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.637 05:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:50.637 05:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.637 05:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.637 05:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.637 05:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.637 05:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.204 00:19:51.204 05:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.204 05:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.204 05:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.204 05:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.204 05:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.204 05:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.204 05:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.204 05:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.204 05:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.204 { 00:19:51.204 "cntlid": 7, 00:19:51.204 "qid": 0, 00:19:51.204 "state": "enabled", 00:19:51.204 "thread": "nvmf_tgt_poll_group_000", 00:19:51.204 "listen_address": { 00:19:51.204 "trtype": "TCP", 00:19:51.204 "adrfam": "IPv4", 00:19:51.204 "traddr": "10.0.0.2", 00:19:51.204 "trsvcid": "4420" 00:19:51.204 }, 00:19:51.204 "peer_address": { 00:19:51.204 "trtype": "TCP", 00:19:51.204 "adrfam": "IPv4", 00:19:51.204 "traddr": "10.0.0.1", 00:19:51.204 "trsvcid": "60312" 00:19:51.204 }, 00:19:51.204 "auth": { 00:19:51.204 "state": "completed", 00:19:51.204 "digest": "sha256", 00:19:51.204 "dhgroup": "null" 00:19:51.204 } 00:19:51.204 } 00:19:51.204 ]' 00:19:51.204 05:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.462 05:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.462 05:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.462 05:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:51.462 05:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.462 05:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.462 05:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.462 05:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.720 05:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MWJmZjFjOGFiZGM2YjU5MzU1ZWJiNmYxNWQ0ZTBiNmZkODFlNDNkOTA0YmI5MjBhMGRjMTRhODNlZjEzNWQxYyLyhT0=: 00:19:52.653 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.653 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.653 05:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.653 05:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.653 05:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.653 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.653 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.653 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.653 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.911 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:52.911 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.911 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:52.911 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:52.911 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:52.911 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.911 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.911 05:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.911 05:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.911 05:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.911 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.911 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.169 00:19:53.169 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.169 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.169 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.427 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.427 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.427 05:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:19:53.427 05:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.427 05:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.427 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.427 { 00:19:53.427 "cntlid": 9, 00:19:53.427 "qid": 0, 00:19:53.427 "state": "enabled", 00:19:53.427 "thread": "nvmf_tgt_poll_group_000", 00:19:53.427 "listen_address": { 00:19:53.427 "trtype": "TCP", 00:19:53.427 "adrfam": "IPv4", 00:19:53.427 "traddr": "10.0.0.2", 00:19:53.427 "trsvcid": "4420" 00:19:53.427 }, 00:19:53.427 "peer_address": { 00:19:53.427 "trtype": "TCP", 00:19:53.427 "adrfam": "IPv4", 00:19:53.427 "traddr": "10.0.0.1", 00:19:53.427 "trsvcid": "40714" 00:19:53.427 }, 00:19:53.427 "auth": { 00:19:53.427 "state": "completed", 00:19:53.427 "digest": "sha256", 00:19:53.427 "dhgroup": "ffdhe2048" 00:19:53.427 } 00:19:53.427 } 00:19:53.427 ]' 00:19:53.427 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.427 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.427 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.685 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:53.685 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.685 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.685 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.685 05:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.942 05:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDBiYTY2ZDU5NGQ0OWZiY2M0NjEyNTU1OWU1MjVmNDg2Y2I1MzYzZjU1N2ZkMGZicfhNbA==: --dhchap-ctrl-secret DHHC-1:03:N2U3ZmUxOGFlZmMyNTBkNGFiNjViMWFiM2JkN2M5NDU3MDNiMzFkNDA4YTMwNzM4MDVkMjg4Njk3MDliZDZlY/8RXHc=: 00:19:54.876 05:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.876 05:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.876 05:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.876 05:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.876 05:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.876 05:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.876 05:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:54.876 05:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:19:55.133 05:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:55.133 05:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.133 05:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:55.133 05:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:55.133 05:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:55.133 05:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.133 05:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.133 05:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.133 05:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.133 05:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.133 05:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.133 05:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.391 00:19:55.391 05:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.391 05:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.391 05:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.649 05:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.649 05:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.649 05:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.649 05:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.649 05:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.649 05:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.649 { 00:19:55.649 "cntlid": 11, 00:19:55.649 "qid": 0, 00:19:55.649 "state": "enabled", 00:19:55.649 "thread": "nvmf_tgt_poll_group_000", 00:19:55.649 "listen_address": { 00:19:55.649 "trtype": "TCP", 00:19:55.649 "adrfam": "IPv4", 00:19:55.649 "traddr": "10.0.0.2", 00:19:55.649 "trsvcid": "4420" 00:19:55.649 }, 00:19:55.649 "peer_address": { 00:19:55.649 "trtype": "TCP", 00:19:55.649 "adrfam": "IPv4", 00:19:55.649 "traddr": "10.0.0.1", 00:19:55.649 "trsvcid": "40746" 00:19:55.649 }, 00:19:55.649 "auth": { 00:19:55.649 "state": "completed", 00:19:55.649 "digest": "sha256", 00:19:55.649 "dhgroup": "ffdhe2048" 00:19:55.649 } 00:19:55.649 } 00:19:55.649 ]' 00:19:55.649 
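The trace above is one iteration of target/auth.sh's connect_authenticate loop: each DH-HMAC-CHAP digest (sha256 so far) is crossed with each dhgroup (null, then ffdhe2048) and each key index. Condensed into plain commands, one pass looks roughly like the sketch below; the rpc.py subcommands and flags are taken verbatim from the trace, while the socket path, NQNs, and key files should be read as placeholders for any other setup.

# A minimal sketch of one connect_authenticate pass, assuming the same
# target (default RPC socket) and host (-s /var/tmp/host.sock) apps as above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Register the DH-HMAC-CHAP key and controller key on both sides.
$rpc               keyring_file_add_key key1  /tmp/spdk.key-sha256.gZQ
$rpc               keyring_file_add_key ckey1 /tmp/spdk.key-sha384.l3E
$rpc -s $host_sock keyring_file_add_key key1  /tmp/spdk.key-sha256.gZQ
$rpc -s $host_sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.l3E

# Pin the host to a single digest/dhgroup combination for this pass.
$rpc -s $host_sock bdev_nvme_set_options --dhchap-digests sha256 \
    --dhchap-dhgroups ffdhe2048

# Allow the host on the subsystem (ckey1 makes authentication bidirectional),
# then attach from the host side, which forces the handshake on connect.
$rpc nvmf_subsystem_add_host $subnqn $hostnqn \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc -s $host_sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

The jq assertions that follow in the trace then check the negotiated parameters on the resulting admin qpair.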
05:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.649 05:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.649 05:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.907 05:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:55.907 05:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.907 05:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.907 05:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.907 05:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.165 05:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDg5OGZiZjU2YjBmMWU0NzYyOGJkZDg2MGIyODI3YWYhEdFv: --dhchap-ctrl-secret DHHC-1:02:MjY4MzU1YTJiZTE3YmU2N2VkYjMwMmRkYzNlYjk0OTNhNTM3ZWU5YjgxYTU3MmUxSzAYhg==: 00:19:57.095 05:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.095 05:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.095 05:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.095 05:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.095 05:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.095 05:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.095 05:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:57.095 05:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:57.095 05:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:57.095 05:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.095 05:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:57.095 05:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:57.095 05:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:57.095 05:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.095 05:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.095 05:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.095 05:09:03 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:57.095 05:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.095 05:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.095 05:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.659 00:19:57.659 05:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.659 05:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.659 05:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.659 05:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.659 05:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.659 05:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.659 05:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.916 05:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.916 05:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.916 { 00:19:57.916 "cntlid": 13, 00:19:57.916 "qid": 0, 00:19:57.916 "state": "enabled", 00:19:57.916 "thread": "nvmf_tgt_poll_group_000", 00:19:57.916 "listen_address": { 00:19:57.916 "trtype": "TCP", 00:19:57.916 "adrfam": "IPv4", 00:19:57.916 "traddr": "10.0.0.2", 00:19:57.916 "trsvcid": "4420" 00:19:57.916 }, 00:19:57.916 "peer_address": { 00:19:57.916 "trtype": "TCP", 00:19:57.916 "adrfam": "IPv4", 00:19:57.916 "traddr": "10.0.0.1", 00:19:57.916 "trsvcid": "40762" 00:19:57.916 }, 00:19:57.916 "auth": { 00:19:57.916 "state": "completed", 00:19:57.916 "digest": "sha256", 00:19:57.916 "dhgroup": "ffdhe2048" 00:19:57.916 } 00:19:57.916 } 00:19:57.916 ]' 00:19:57.916 05:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.916 05:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.916 05:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.916 05:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:57.916 05:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.916 05:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.916 05:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.917 05:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.174 05:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWFkMTJmZTgxMGU5NTY3YWQ4ZTg3Yjc5OWMyMmI1MjA2YTRmYWM1NGE2ODk4MDJheb3/8w==: --dhchap-ctrl-secret DHHC-1:01:NWY0ODgyMWZjMWE2NWNmZmQ2ZDgyNjFlMzg0YTU2ZDgcsnP3: 00:19:59.109 05:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.109 05:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.109 05:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.109 05:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.109 05:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.109 05:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.109 05:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:59.109 05:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:59.367 05:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:59.367 05:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.367 05:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:59.367 05:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:59.367 05:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:59.367 05:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.367 05:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:59.367 05:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.367 05:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.367 05:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.367 05:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:59.367 05:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:59.624 00:19:59.624 05:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.624 05:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.624 05:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.882 05:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.882 05:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.882 05:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.882 05:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.882 05:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.882 05:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.882 { 00:19:59.882 "cntlid": 15, 00:19:59.882 "qid": 0, 00:19:59.882 "state": "enabled", 00:19:59.882 "thread": "nvmf_tgt_poll_group_000", 00:19:59.882 "listen_address": { 00:19:59.882 "trtype": "TCP", 00:19:59.882 "adrfam": "IPv4", 00:19:59.882 "traddr": "10.0.0.2", 00:19:59.882 "trsvcid": "4420" 00:19:59.882 }, 00:19:59.882 "peer_address": { 00:19:59.882 "trtype": "TCP", 00:19:59.882 "adrfam": "IPv4", 00:19:59.882 "traddr": "10.0.0.1", 00:19:59.882 "trsvcid": "40794" 00:19:59.882 }, 00:19:59.882 "auth": { 00:19:59.882 "state": "completed", 00:19:59.882 "digest": "sha256", 00:19:59.882 "dhgroup": "ffdhe2048" 00:19:59.882 } 00:19:59.882 } 00:19:59.882 ]' 00:19:59.882 05:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.882 05:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.882 05:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.882 05:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:59.882 05:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.140 05:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.140 05:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.140 05:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.399 05:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MWJmZjFjOGFiZGM2YjU5MzU1ZWJiNmYxNWQ0ZTBiNmZkODFlNDNkOTA0YmI5MjBhMGRjMTRhODNlZjEzNWQxYyLyhT0=: 00:20:01.332 05:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.332 05:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.332 05:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.332 05:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.332 05:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.332 05:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.332 05:09:07 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.332 05:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:01.332 05:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:01.332 05:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:20:01.332 05:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.332 05:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:01.332 05:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:01.332 05:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:01.332 05:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.332 05:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.332 05:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.332 05:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.333 05:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.333 05:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.333 05:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.898 00:20:01.898 05:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.898 05:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.898 05:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.158 05:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.158 05:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.158 05:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.158 05:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.158 05:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.158 05:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.158 { 00:20:02.158 "cntlid": 17, 00:20:02.158 "qid": 0, 00:20:02.158 "state": "enabled", 00:20:02.158 "thread": "nvmf_tgt_poll_group_000", 00:20:02.158 "listen_address": { 00:20:02.158 "trtype": "TCP", 00:20:02.158 "adrfam": "IPv4", 00:20:02.158 "traddr": 
"10.0.0.2", 00:20:02.158 "trsvcid": "4420" 00:20:02.158 }, 00:20:02.158 "peer_address": { 00:20:02.158 "trtype": "TCP", 00:20:02.158 "adrfam": "IPv4", 00:20:02.158 "traddr": "10.0.0.1", 00:20:02.158 "trsvcid": "37640" 00:20:02.158 }, 00:20:02.158 "auth": { 00:20:02.158 "state": "completed", 00:20:02.158 "digest": "sha256", 00:20:02.158 "dhgroup": "ffdhe3072" 00:20:02.158 } 00:20:02.158 } 00:20:02.158 ]' 00:20:02.158 05:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.158 05:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.158 05:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.158 05:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:02.158 05:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.158 05:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.158 05:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.158 05:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.415 05:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDBiYTY2ZDU5NGQ0OWZiY2M0NjEyNTU1OWU1MjVmNDg2Y2I1MzYzZjU1N2ZkMGZicfhNbA==: --dhchap-ctrl-secret DHHC-1:03:N2U3ZmUxOGFlZmMyNTBkNGFiNjViMWFiM2JkN2M5NDU3MDNiMzFkNDA4YTMwNzM4MDVkMjg4Njk3MDliZDZlY/8RXHc=: 00:20:03.350 05:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.350 05:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.350 05:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.350 05:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.350 05:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.350 05:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.350 05:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:03.350 05:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:03.608 05:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:20:03.608 05:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.608 05:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:03.608 05:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:03.608 05:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:03.608 05:09:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.608 05:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.608 05:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.608 05:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.608 05:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.608 05:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.608 05:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.868 00:20:03.868 05:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.868 05:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.868 05:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.127 05:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.127 05:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.127 05:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.127 05:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.127 05:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.127 05:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.127 { 00:20:04.127 "cntlid": 19, 00:20:04.127 "qid": 0, 00:20:04.127 "state": "enabled", 00:20:04.127 "thread": "nvmf_tgt_poll_group_000", 00:20:04.127 "listen_address": { 00:20:04.127 "trtype": "TCP", 00:20:04.127 "adrfam": "IPv4", 00:20:04.127 "traddr": "10.0.0.2", 00:20:04.127 "trsvcid": "4420" 00:20:04.127 }, 00:20:04.127 "peer_address": { 00:20:04.127 "trtype": "TCP", 00:20:04.127 "adrfam": "IPv4", 00:20:04.127 "traddr": "10.0.0.1", 00:20:04.127 "trsvcid": "37668" 00:20:04.127 }, 00:20:04.127 "auth": { 00:20:04.127 "state": "completed", 00:20:04.127 "digest": "sha256", 00:20:04.127 "dhgroup": "ffdhe3072" 00:20:04.127 } 00:20:04.127 } 00:20:04.127 ]' 00:20:04.127 05:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.385 05:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.385 05:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.385 05:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:04.385 05:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.385 05:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.385 05:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.385 05:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.643 05:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDg5OGZiZjU2YjBmMWU0NzYyOGJkZDg2MGIyODI3YWYhEdFv: --dhchap-ctrl-secret DHHC-1:02:MjY4MzU1YTJiZTE3YmU2N2VkYjMwMmRkYzNlYjk0OTNhNTM3ZWU5YjgxYTU3MmUxSzAYhg==: 00:20:05.577 05:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.577 05:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.577 05:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.577 05:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.577 05:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.577 05:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.577 05:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:05.577 05:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:05.835 05:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:20:05.835 05:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:05.835 05:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:05.835 05:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:05.835 05:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:05.835 05:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.835 05:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.835 05:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.835 05:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.835 05:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.835 05:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.835 05:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.093 00:20:06.094 05:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.094 05:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.094 05:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.361 05:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.361 05:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.361 05:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.361 05:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.361 05:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.361 05:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.361 { 00:20:06.361 "cntlid": 21, 00:20:06.361 "qid": 0, 00:20:06.361 "state": "enabled", 00:20:06.361 "thread": "nvmf_tgt_poll_group_000", 00:20:06.361 "listen_address": { 00:20:06.361 "trtype": "TCP", 00:20:06.361 "adrfam": "IPv4", 00:20:06.361 "traddr": "10.0.0.2", 00:20:06.361 "trsvcid": "4420" 00:20:06.361 }, 00:20:06.361 "peer_address": { 00:20:06.361 "trtype": "TCP", 00:20:06.361 "adrfam": "IPv4", 00:20:06.362 "traddr": "10.0.0.1", 00:20:06.362 "trsvcid": "37706" 00:20:06.362 }, 00:20:06.362 "auth": { 00:20:06.362 "state": "completed", 00:20:06.362 "digest": "sha256", 00:20:06.362 "dhgroup": "ffdhe3072" 00:20:06.362 } 00:20:06.362 } 00:20:06.362 ]' 00:20:06.362 05:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.362 05:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.362 05:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:06.362 05:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:06.362 05:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:06.625 05:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.625 05:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.625 05:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.881 05:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWFkMTJmZTgxMGU5NTY3YWQ4ZTg3Yjc5OWMyMmI1MjA2YTRmYWM1NGE2ODk4MDJheb3/8w==: --dhchap-ctrl-secret DHHC-1:01:NWY0ODgyMWZjMWE2NWNmZmQ2ZDgyNjFlMzg0YTU2ZDgcsnP3: 00:20:07.812 05:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
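
The exchange above is one complete iteration of the loop driven by target/auth.sh: for each (digest, dhgroup, key id) combination the harness reconfigures the host-side NVMe driver, grants the host NQN access to the subsystem with a DH-HMAC-CHAP key pair, attaches a controller and checks that the qpair actually negotiated the expected digest, dhgroup and auth state, then repeats the handshake through the kernel initiator before tearing everything down. A condensed, unofficial sketch of that iteration follows, using only the commands, socket paths and addresses visible in this run; key1/ckey1 are key names prepared earlier in the script (outside this excerpt), the target-side calls go to the target application's default RPC socket, and $DHCHAP_KEY/$DHCHAP_CTRL_KEY stand in for the DHHC-1 secrets printed above.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # host side: restrict the initiator to the digest/dhgroup under test
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

  # target side: allow the host NQN with its DH-HMAC-CHAP key pair
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # host side: attach, then verify the qpair really authenticated
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.state'                                             # expect "completed"
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

  # kernel initiator: the same handshake via nvme-cli, with the DHHC-1
  # secrets shown in the log abbreviated as shell variables
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret "$DHCHAP_KEY" --dhchap-ctrl-secret "$DHCHAP_CTRL_KEY"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

  # tear down before the next (digest, dhgroup, key) combination
  $rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
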
00:20:07.812 05:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.812 05:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.812 05:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.812 05:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.812 05:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.812 05:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:07.812 05:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:08.069 05:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:20:08.069 05:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:08.069 05:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:08.069 05:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:08.069 05:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:08.069 05:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.069 05:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:08.069 05:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.069 05:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.069 05:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.069 05:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:08.069 05:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:08.326 00:20:08.326 05:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.326 05:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.326 05:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.583 05:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.583 05:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.583 05:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.583 05:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:20:08.583 05:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.583 05:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.583 { 00:20:08.583 "cntlid": 23, 00:20:08.583 "qid": 0, 00:20:08.583 "state": "enabled", 00:20:08.583 "thread": "nvmf_tgt_poll_group_000", 00:20:08.583 "listen_address": { 00:20:08.583 "trtype": "TCP", 00:20:08.583 "adrfam": "IPv4", 00:20:08.583 "traddr": "10.0.0.2", 00:20:08.583 "trsvcid": "4420" 00:20:08.583 }, 00:20:08.583 "peer_address": { 00:20:08.583 "trtype": "TCP", 00:20:08.583 "adrfam": "IPv4", 00:20:08.583 "traddr": "10.0.0.1", 00:20:08.583 "trsvcid": "37724" 00:20:08.583 }, 00:20:08.583 "auth": { 00:20:08.583 "state": "completed", 00:20:08.583 "digest": "sha256", 00:20:08.583 "dhgroup": "ffdhe3072" 00:20:08.583 } 00:20:08.583 } 00:20:08.583 ]' 00:20:08.583 05:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.583 05:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:08.583 05:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.583 05:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:08.583 05:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.839 05:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.839 05:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.839 05:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.097 05:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MWJmZjFjOGFiZGM2YjU5MzU1ZWJiNmYxNWQ0ZTBiNmZkODFlNDNkOTA0YmI5MjBhMGRjMTRhODNlZjEzNWQxYyLyhT0=: 00:20:10.026 05:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.026 05:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.026 05:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.026 05:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.026 05:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.026 05:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:10.026 05:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:10.026 05:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:10.027 05:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:10.283 05:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:20:10.283 05:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:10.283 05:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:10.283 05:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:10.283 05:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:10.283 05:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.283 05:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.283 05:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.283 05:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.283 05:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.283 05:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.283 05:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.539 00:20:10.539 05:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.539 05:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.539 05:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.796 05:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.796 05:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.796 05:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.796 05:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.053 05:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.053 05:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.053 { 00:20:11.053 "cntlid": 25, 00:20:11.053 "qid": 0, 00:20:11.053 "state": "enabled", 00:20:11.053 "thread": "nvmf_tgt_poll_group_000", 00:20:11.053 "listen_address": { 00:20:11.053 "trtype": "TCP", 00:20:11.053 "adrfam": "IPv4", 00:20:11.053 "traddr": "10.0.0.2", 00:20:11.053 "trsvcid": "4420" 00:20:11.053 }, 00:20:11.053 "peer_address": { 00:20:11.053 "trtype": "TCP", 00:20:11.053 "adrfam": "IPv4", 00:20:11.053 "traddr": "10.0.0.1", 00:20:11.053 "trsvcid": "37744" 00:20:11.053 }, 00:20:11.053 "auth": { 00:20:11.053 "state": "completed", 00:20:11.053 "digest": "sha256", 00:20:11.053 "dhgroup": "ffdhe4096" 00:20:11.053 } 00:20:11.053 } 00:20:11.053 ]' 00:20:11.053 05:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:11.053 05:09:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.053 05:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.053 05:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:11.053 05:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.053 05:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.053 05:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.053 05:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.311 05:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDBiYTY2ZDU5NGQ0OWZiY2M0NjEyNTU1OWU1MjVmNDg2Y2I1MzYzZjU1N2ZkMGZicfhNbA==: --dhchap-ctrl-secret DHHC-1:03:N2U3ZmUxOGFlZmMyNTBkNGFiNjViMWFiM2JkN2M5NDU3MDNiMzFkNDA4YTMwNzM4MDVkMjg4Njk3MDliZDZlY/8RXHc=: 00:20:12.244 05:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.244 05:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.244 05:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.244 05:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.244 05:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.244 05:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:12.244 05:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:12.244 05:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:12.502 05:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:20:12.502 05:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.502 05:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:12.502 05:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:12.502 05:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:12.502 05:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.502 05:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.502 05:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.502 05:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.502 05:09:18 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.502 05:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.502 05:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.760 00:20:13.018 05:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.018 05:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.018 05:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.277 05:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.277 05:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.277 05:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.277 05:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.277 05:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.277 05:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:13.277 { 00:20:13.277 "cntlid": 27, 00:20:13.277 "qid": 0, 00:20:13.277 "state": "enabled", 00:20:13.277 "thread": "nvmf_tgt_poll_group_000", 00:20:13.277 "listen_address": { 00:20:13.277 "trtype": "TCP", 00:20:13.277 "adrfam": "IPv4", 00:20:13.277 "traddr": "10.0.0.2", 00:20:13.277 "trsvcid": "4420" 00:20:13.277 }, 00:20:13.277 "peer_address": { 00:20:13.277 "trtype": "TCP", 00:20:13.277 "adrfam": "IPv4", 00:20:13.277 "traddr": "10.0.0.1", 00:20:13.277 "trsvcid": "57064" 00:20:13.277 }, 00:20:13.277 "auth": { 00:20:13.277 "state": "completed", 00:20:13.277 "digest": "sha256", 00:20:13.277 "dhgroup": "ffdhe4096" 00:20:13.277 } 00:20:13.277 } 00:20:13.277 ]' 00:20:13.277 05:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.277 05:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:13.277 05:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.277 05:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:13.277 05:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.277 05:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.277 05:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.277 05:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.535 05:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDg5OGZiZjU2YjBmMWU0NzYyOGJkZDg2MGIyODI3YWYhEdFv: --dhchap-ctrl-secret DHHC-1:02:MjY4MzU1YTJiZTE3YmU2N2VkYjMwMmRkYzNlYjk0OTNhNTM3ZWU5YjgxYTU3MmUxSzAYhg==: 00:20:14.500 05:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.500 05:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.500 05:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.500 05:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.500 05:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.500 05:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.500 05:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:14.500 05:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:14.758 05:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:20:14.758 05:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.759 05:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:14.759 05:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:14.759 05:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:14.759 05:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.759 05:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.759 05:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.759 05:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.759 05:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.759 05:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.759 05:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.324 00:20:15.324 05:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:15.324 05:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:15.324 05:09:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.582 05:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.582 05:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.582 05:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.582 05:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.582 05:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.582 05:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:15.582 { 00:20:15.582 "cntlid": 29, 00:20:15.582 "qid": 0, 00:20:15.582 "state": "enabled", 00:20:15.582 "thread": "nvmf_tgt_poll_group_000", 00:20:15.583 "listen_address": { 00:20:15.583 "trtype": "TCP", 00:20:15.583 "adrfam": "IPv4", 00:20:15.583 "traddr": "10.0.0.2", 00:20:15.583 "trsvcid": "4420" 00:20:15.583 }, 00:20:15.583 "peer_address": { 00:20:15.583 "trtype": "TCP", 00:20:15.583 "adrfam": "IPv4", 00:20:15.583 "traddr": "10.0.0.1", 00:20:15.583 "trsvcid": "57090" 00:20:15.583 }, 00:20:15.583 "auth": { 00:20:15.583 "state": "completed", 00:20:15.583 "digest": "sha256", 00:20:15.583 "dhgroup": "ffdhe4096" 00:20:15.583 } 00:20:15.583 } 00:20:15.583 ]' 00:20:15.583 05:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:15.583 05:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:15.583 05:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:15.583 05:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:15.583 05:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:15.583 05:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.583 05:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.583 05:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.841 05:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWFkMTJmZTgxMGU5NTY3YWQ4ZTg3Yjc5OWMyMmI1MjA2YTRmYWM1NGE2ODk4MDJheb3/8w==: --dhchap-ctrl-secret DHHC-1:01:NWY0ODgyMWZjMWE2NWNmZmQ2ZDgyNjFlMzg0YTU2ZDgcsnP3: 00:20:16.773 05:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.773 05:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.773 05:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.773 05:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.773 05:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.773 05:09:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.773 05:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:16.773 05:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:17.030 05:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:20:17.030 05:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.030 05:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:17.030 05:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:17.030 05:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:17.030 05:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.030 05:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:17.030 05:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.030 05:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.030 05:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.030 05:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:17.030 05:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:17.595 00:20:17.595 05:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:17.595 05:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:17.595 05:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.853 05:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.853 05:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.853 05:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.853 05:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.853 05:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.853 05:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:17.853 { 00:20:17.853 "cntlid": 31, 00:20:17.853 "qid": 0, 00:20:17.853 "state": "enabled", 00:20:17.853 "thread": "nvmf_tgt_poll_group_000", 00:20:17.853 "listen_address": { 00:20:17.853 "trtype": "TCP", 00:20:17.853 "adrfam": "IPv4", 00:20:17.853 "traddr": "10.0.0.2", 00:20:17.853 "trsvcid": "4420" 00:20:17.853 }, 
00:20:17.853 "peer_address": { 00:20:17.853 "trtype": "TCP", 00:20:17.853 "adrfam": "IPv4", 00:20:17.853 "traddr": "10.0.0.1", 00:20:17.853 "trsvcid": "57108" 00:20:17.853 }, 00:20:17.853 "auth": { 00:20:17.853 "state": "completed", 00:20:17.853 "digest": "sha256", 00:20:17.853 "dhgroup": "ffdhe4096" 00:20:17.853 } 00:20:17.853 } 00:20:17.853 ]' 00:20:17.853 05:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:17.853 05:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:17.853 05:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:17.853 05:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:17.853 05:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.853 05:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.853 05:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.853 05:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.111 05:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MWJmZjFjOGFiZGM2YjU5MzU1ZWJiNmYxNWQ0ZTBiNmZkODFlNDNkOTA0YmI5MjBhMGRjMTRhODNlZjEzNWQxYyLyhT0=: 00:20:19.478 05:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.478 05:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:19.478 05:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.478 05:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.478 05:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.478 05:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:19.478 05:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:19.478 05:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:19.478 05:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:19.478 05:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:20:19.478 05:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:19.478 05:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:19.478 05:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:19.478 05:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:19.478 05:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:20:19.478 05:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.478 05:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.478 05:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.478 05:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.478 05:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.478 05:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.043 00:20:20.043 05:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.043 05:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.043 05:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.301 05:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.301 05:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.301 05:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.301 05:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.301 05:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.301 05:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.301 { 00:20:20.301 "cntlid": 33, 00:20:20.301 "qid": 0, 00:20:20.301 "state": "enabled", 00:20:20.301 "thread": "nvmf_tgt_poll_group_000", 00:20:20.301 "listen_address": { 00:20:20.301 "trtype": "TCP", 00:20:20.301 "adrfam": "IPv4", 00:20:20.301 "traddr": "10.0.0.2", 00:20:20.301 "trsvcid": "4420" 00:20:20.301 }, 00:20:20.301 "peer_address": { 00:20:20.301 "trtype": "TCP", 00:20:20.301 "adrfam": "IPv4", 00:20:20.301 "traddr": "10.0.0.1", 00:20:20.301 "trsvcid": "57150" 00:20:20.301 }, 00:20:20.301 "auth": { 00:20:20.301 "state": "completed", 00:20:20.301 "digest": "sha256", 00:20:20.301 "dhgroup": "ffdhe6144" 00:20:20.301 } 00:20:20.301 } 00:20:20.301 ]' 00:20:20.301 05:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:20.301 05:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:20.301 05:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:20.301 05:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:20.301 05:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:20.559 05:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.559 05:09:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.559 05:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.816 05:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDBiYTY2ZDU5NGQ0OWZiY2M0NjEyNTU1OWU1MjVmNDg2Y2I1MzYzZjU1N2ZkMGZicfhNbA==: --dhchap-ctrl-secret DHHC-1:03:N2U3ZmUxOGFlZmMyNTBkNGFiNjViMWFiM2JkN2M5NDU3MDNiMzFkNDA4YTMwNzM4MDVkMjg4Njk3MDliZDZlY/8RXHc=: 00:20:21.748 05:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.748 05:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.748 05:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.748 05:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.748 05:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.748 05:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:21.748 05:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:21.748 05:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:22.005 05:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:20:22.005 05:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.005 05:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:22.005 05:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:22.005 05:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:22.005 05:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.005 05:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.005 05:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.005 05:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.005 05:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.005 05:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.005 05:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.570 00:20:22.570 05:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:22.570 05:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:22.570 05:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.827 05:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.827 05:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.828 05:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.828 05:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.828 05:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.828 05:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:22.828 { 00:20:22.828 "cntlid": 35, 00:20:22.828 "qid": 0, 00:20:22.828 "state": "enabled", 00:20:22.828 "thread": "nvmf_tgt_poll_group_000", 00:20:22.828 "listen_address": { 00:20:22.828 "trtype": "TCP", 00:20:22.828 "adrfam": "IPv4", 00:20:22.828 "traddr": "10.0.0.2", 00:20:22.828 "trsvcid": "4420" 00:20:22.828 }, 00:20:22.828 "peer_address": { 00:20:22.828 "trtype": "TCP", 00:20:22.828 "adrfam": "IPv4", 00:20:22.828 "traddr": "10.0.0.1", 00:20:22.828 "trsvcid": "44588" 00:20:22.828 }, 00:20:22.828 "auth": { 00:20:22.828 "state": "completed", 00:20:22.828 "digest": "sha256", 00:20:22.828 "dhgroup": "ffdhe6144" 00:20:22.828 } 00:20:22.828 } 00:20:22.828 ]' 00:20:22.828 05:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:22.828 05:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:22.828 05:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:22.828 05:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:22.828 05:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:22.828 05:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.828 05:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.828 05:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.085 05:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDg5OGZiZjU2YjBmMWU0NzYyOGJkZDg2MGIyODI3YWYhEdFv: --dhchap-ctrl-secret DHHC-1:02:MjY4MzU1YTJiZTE3YmU2N2VkYjMwMmRkYzNlYjk0OTNhNTM3ZWU5YjgxYTU3MmUxSzAYhg==: 00:20:24.018 05:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.018 05:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.018 05:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.018 05:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.018 05:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.018 05:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.018 05:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:24.018 05:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:24.275 05:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:20:24.275 05:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.275 05:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:24.275 05:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:24.275 05:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:24.275 05:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.275 05:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.275 05:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.275 05:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.275 05:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.275 05:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.275 05:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.840 00:20:24.840 05:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:24.840 05:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:24.840 05:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.099 05:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.099 05:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.099 05:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.099 05:09:31 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:25.099 05:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.099 05:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.099 { 00:20:25.099 "cntlid": 37, 00:20:25.099 "qid": 0, 00:20:25.099 "state": "enabled", 00:20:25.099 "thread": "nvmf_tgt_poll_group_000", 00:20:25.099 "listen_address": { 00:20:25.099 "trtype": "TCP", 00:20:25.099 "adrfam": "IPv4", 00:20:25.099 "traddr": "10.0.0.2", 00:20:25.099 "trsvcid": "4420" 00:20:25.099 }, 00:20:25.099 "peer_address": { 00:20:25.099 "trtype": "TCP", 00:20:25.099 "adrfam": "IPv4", 00:20:25.099 "traddr": "10.0.0.1", 00:20:25.099 "trsvcid": "44612" 00:20:25.099 }, 00:20:25.099 "auth": { 00:20:25.099 "state": "completed", 00:20:25.099 "digest": "sha256", 00:20:25.099 "dhgroup": "ffdhe6144" 00:20:25.099 } 00:20:25.099 } 00:20:25.099 ]' 00:20:25.099 05:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.099 05:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:25.099 05:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.358 05:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:25.358 05:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.358 05:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.358 05:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.358 05:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.616 05:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWFkMTJmZTgxMGU5NTY3YWQ4ZTg3Yjc5OWMyMmI1MjA2YTRmYWM1NGE2ODk4MDJheb3/8w==: --dhchap-ctrl-secret DHHC-1:01:NWY0ODgyMWZjMWE2NWNmZmQ2ZDgyNjFlMzg0YTU2ZDgcsnP3: 00:20:26.549 05:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.549 05:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.549 05:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.549 05:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.549 05:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.549 05:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.549 05:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:26.549 05:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:26.812 05:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:20:26.812 05:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.812 05:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:26.812 05:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:26.812 05:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:26.812 05:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.812 05:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:26.812 05:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.812 05:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.812 05:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.812 05:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.812 05:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:27.404 00:20:27.404 05:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.404 05:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.404 05:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.662 05:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.662 05:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.662 05:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.662 05:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.662 05:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.662 05:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.662 { 00:20:27.662 "cntlid": 39, 00:20:27.662 "qid": 0, 00:20:27.662 "state": "enabled", 00:20:27.662 "thread": "nvmf_tgt_poll_group_000", 00:20:27.662 "listen_address": { 00:20:27.662 "trtype": "TCP", 00:20:27.662 "adrfam": "IPv4", 00:20:27.662 "traddr": "10.0.0.2", 00:20:27.662 "trsvcid": "4420" 00:20:27.662 }, 00:20:27.662 "peer_address": { 00:20:27.662 "trtype": "TCP", 00:20:27.662 "adrfam": "IPv4", 00:20:27.662 "traddr": "10.0.0.1", 00:20:27.662 "trsvcid": "44648" 00:20:27.662 }, 00:20:27.662 "auth": { 00:20:27.662 "state": "completed", 00:20:27.662 "digest": "sha256", 00:20:27.662 "dhgroup": "ffdhe6144" 00:20:27.662 } 00:20:27.662 } 00:20:27.662 ]' 00:20:27.662 05:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.662 05:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:27.662 05:09:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:27.662 05:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:27.662 05:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.662 05:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.662 05:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.662 05:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.920 05:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MWJmZjFjOGFiZGM2YjU5MzU1ZWJiNmYxNWQ0ZTBiNmZkODFlNDNkOTA0YmI5MjBhMGRjMTRhODNlZjEzNWQxYyLyhT0=: 00:20:28.854 05:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.854 05:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.854 05:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.854 05:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.854 05:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.854 05:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.854 05:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:28.854 05:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:28.854 05:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:29.112 05:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:20:29.112 05:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.112 05:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:29.112 05:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:29.112 05:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:29.112 05:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.112 05:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.112 05:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.112 05:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.112 05:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.112 05:09:35 
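Each pass authenticates the same key twice: once through the SPDK host stack (bdev_nvme_attach_controller) and once through the kernel initiator via nvme-cli, which takes the shared secret directly in its DHHC-1 wire form (DHHC-1:<hash id>:<base64 key>:), as in the nvme connect call above. A sketch of that kernel leg with a placeholder secret; the other arguments mirror the trace:

    # Sketch: kernel-initiator connect using in-band DH-HMAC-CHAP.
    HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
    SECRET='DHHC-1:03:PLACEHOLDER_BASE64_KEY=:'   # placeholder, not a real key
    nvme connect -t tcp -a 10.0.0.2 -i 1 \
        -n nqn.2024-03.io.spdk:cnode0 \
        -q "nqn.2014-08.org.nvmexpress:uuid:$HOSTID" --hostid "$HOSTID" \
        --dhchap-secret "$SECRET"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # tear the session down again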
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.112 05:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.046 00:20:30.046 05:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.046 05:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.046 05:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.304 05:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.304 05:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.304 05:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.304 05:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.304 05:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.304 05:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:30.304 { 00:20:30.304 "cntlid": 41, 00:20:30.304 "qid": 0, 00:20:30.304 "state": "enabled", 00:20:30.304 "thread": "nvmf_tgt_poll_group_000", 00:20:30.304 "listen_address": { 00:20:30.304 "trtype": "TCP", 00:20:30.304 "adrfam": "IPv4", 00:20:30.304 "traddr": "10.0.0.2", 00:20:30.304 "trsvcid": "4420" 00:20:30.304 }, 00:20:30.304 "peer_address": { 00:20:30.304 "trtype": "TCP", 00:20:30.304 "adrfam": "IPv4", 00:20:30.304 "traddr": "10.0.0.1", 00:20:30.304 "trsvcid": "44674" 00:20:30.304 }, 00:20:30.304 "auth": { 00:20:30.304 "state": "completed", 00:20:30.304 "digest": "sha256", 00:20:30.304 "dhgroup": "ffdhe8192" 00:20:30.304 } 00:20:30.304 } 00:20:30.304 ]' 00:20:30.304 05:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:30.304 05:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:30.304 05:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.304 05:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:30.304 05:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.304 05:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.304 05:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.304 05:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.562 05:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:ZDBiYTY2ZDU5NGQ0OWZiY2M0NjEyNTU1OWU1MjVmNDg2Y2I1MzYzZjU1N2ZkMGZicfhNbA==: --dhchap-ctrl-secret DHHC-1:03:N2U3ZmUxOGFlZmMyNTBkNGFiNjViMWFiM2JkN2M5NDU3MDNiMzFkNDA4YTMwNzM4MDVkMjg4Njk3MDliZDZlY/8RXHc=: 00:20:31.936 05:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.936 05:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.936 05:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.936 05:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.936 05:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.936 05:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.936 05:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:31.936 05:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:31.936 05:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:20:31.936 05:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.936 05:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:31.936 05:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:31.936 05:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:31.936 05:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.936 05:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.936 05:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.936 05:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.936 05:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.936 05:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.936 05:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.870 00:20:32.870 05:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.870 05:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.870 05:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
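Key slots 0 through 2 also carry a controller key (ckey0..ckey2), so passes like the key0/key1 iterations around here exercise bidirectional authentication: the controller must prove its identity back to the host. The SPDK RPCs name the second key explicitly; the kernel path spells the same thing --dhchap-ctrl-secret, as in the connect above. A sketch using the trace's own rpc_cmd/hostrpc helpers (key1/ckey1 are key names the test registered earlier):

    # Sketch: bidirectional DH-HMAC-CHAP.
    SUBSYS=nqn.2024-03.io.spdk:cnode0
    HOST=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    # Target side: key1 authenticates the host, ckey1 authenticates the controller.
    rpc_cmd nvmf_subsystem_add_host "$SUBSYS" "$HOST" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Host side: the same pair must be supplied for the handshake to complete.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 \
        -s 4420 -q "$HOST" -n "$SUBSYS" --dhchap-key key1 --dhchap-ctrlr-key ckey1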
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.128 05:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.128 05:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.128 05:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.128 05:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.128 05:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.128 05:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:33.128 { 00:20:33.128 "cntlid": 43, 00:20:33.128 "qid": 0, 00:20:33.128 "state": "enabled", 00:20:33.128 "thread": "nvmf_tgt_poll_group_000", 00:20:33.128 "listen_address": { 00:20:33.128 "trtype": "TCP", 00:20:33.128 "adrfam": "IPv4", 00:20:33.128 "traddr": "10.0.0.2", 00:20:33.128 "trsvcid": "4420" 00:20:33.128 }, 00:20:33.128 "peer_address": { 00:20:33.128 "trtype": "TCP", 00:20:33.128 "adrfam": "IPv4", 00:20:33.128 "traddr": "10.0.0.1", 00:20:33.128 "trsvcid": "58852" 00:20:33.128 }, 00:20:33.128 "auth": { 00:20:33.128 "state": "completed", 00:20:33.128 "digest": "sha256", 00:20:33.128 "dhgroup": "ffdhe8192" 00:20:33.128 } 00:20:33.128 } 00:20:33.128 ]' 00:20:33.128 05:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.128 05:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:33.128 05:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.128 05:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:33.128 05:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.128 05:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.128 05:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.128 05:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.386 05:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDg5OGZiZjU2YjBmMWU0NzYyOGJkZDg2MGIyODI3YWYhEdFv: --dhchap-ctrl-secret DHHC-1:02:MjY4MzU1YTJiZTE3YmU2N2VkYjMwMmRkYzNlYjk0OTNhNTM3ZWU5YjgxYTU3MmUxSzAYhg==: 00:20:34.320 05:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.320 05:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.320 05:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.320 05:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.320 05:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.320 05:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
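Two SPDK instances are in play throughout this trace: the target, reached by rpc_cmd on the default RPC socket, and a second host-side instance reached through the hostrpc wrapper, whose expansion at target/auth.sh@31 is visible on every call. Reconstructed from that expansion:

    # The hostrpc helper as it expands at target/auth.sh@31 in the trace.
    hostrpc() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }
    # Example: list controllers on the host-side instance, as the test does.
    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # -> nvme0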
"${!keys[@]}" 00:20:34.320 05:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:34.320 05:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:34.578 05:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:20:34.578 05:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.578 05:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:34.578 05:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:34.578 05:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:34.578 05:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.578 05:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.578 05:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.578 05:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.578 05:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.578 05:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.578 05:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.512 00:20:35.512 05:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:35.512 05:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.512 05:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.770 05:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.770 05:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.770 05:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.770 05:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.770 05:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.770 05:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.770 { 00:20:35.770 "cntlid": 45, 00:20:35.770 "qid": 0, 00:20:35.770 "state": "enabled", 00:20:35.770 "thread": "nvmf_tgt_poll_group_000", 00:20:35.770 "listen_address": { 00:20:35.770 "trtype": "TCP", 00:20:35.770 "adrfam": "IPv4", 00:20:35.770 "traddr": "10.0.0.2", 00:20:35.770 "trsvcid": "4420" 
00:20:35.770 }, 00:20:35.770 "peer_address": { 00:20:35.770 "trtype": "TCP", 00:20:35.770 "adrfam": "IPv4", 00:20:35.770 "traddr": "10.0.0.1", 00:20:35.770 "trsvcid": "58878" 00:20:35.770 }, 00:20:35.770 "auth": { 00:20:35.770 "state": "completed", 00:20:35.770 "digest": "sha256", 00:20:35.770 "dhgroup": "ffdhe8192" 00:20:35.770 } 00:20:35.770 } 00:20:35.770 ]' 00:20:35.770 05:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.770 05:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:35.770 05:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.770 05:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:35.770 05:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.028 05:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.028 05:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.028 05:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.286 05:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWFkMTJmZTgxMGU5NTY3YWQ4ZTg3Yjc5OWMyMmI1MjA2YTRmYWM1NGE2ODk4MDJheb3/8w==: --dhchap-ctrl-secret DHHC-1:01:NWY0ODgyMWZjMWE2NWNmZmQ2ZDgyNjFlMzg0YTU2ZDgcsnP3: 00:20:37.219 05:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.219 05:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.219 05:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.219 05:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.219 05:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.219 05:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.219 05:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:37.219 05:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:37.476 05:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:37.476 05:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:37.476 05:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:37.476 05:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:37.476 05:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:37.476 05:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.476 05:09:43 
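Before each attach, bdev_nvme_set_options narrows the host to the single digest/DH-group pair under test, so a pass can only negotiate that one combination and a regression surfaces as an authentication failure rather than a silent fallback. The reconfiguration for the pass running here:

    # Sketch: pin the host to one digest and one DH group per pass.
    hostrpc bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192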
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:37.476 05:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.476 05:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.476 05:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.476 05:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:37.476 05:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:38.414 00:20:38.414 05:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.414 05:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.414 05:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.671 05:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.671 05:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.671 05:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.671 05:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.671 05:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.671 05:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:38.671 { 00:20:38.671 "cntlid": 47, 00:20:38.671 "qid": 0, 00:20:38.671 "state": "enabled", 00:20:38.672 "thread": "nvmf_tgt_poll_group_000", 00:20:38.672 "listen_address": { 00:20:38.672 "trtype": "TCP", 00:20:38.672 "adrfam": "IPv4", 00:20:38.672 "traddr": "10.0.0.2", 00:20:38.672 "trsvcid": "4420" 00:20:38.672 }, 00:20:38.672 "peer_address": { 00:20:38.672 "trtype": "TCP", 00:20:38.672 "adrfam": "IPv4", 00:20:38.672 "traddr": "10.0.0.1", 00:20:38.672 "trsvcid": "58908" 00:20:38.672 }, 00:20:38.672 "auth": { 00:20:38.672 "state": "completed", 00:20:38.672 "digest": "sha256", 00:20:38.672 "dhgroup": "ffdhe8192" 00:20:38.672 } 00:20:38.672 } 00:20:38.672 ]' 00:20:38.672 05:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:38.672 05:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:38.672 05:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:38.672 05:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:38.672 05:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:38.672 05:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.672 05:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.672 
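Note that key3 is added without --dhchap-ctrlr-key, so slot 3 exercises unidirectional authentication only. The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line at target/auth.sh@37 is the bash idiom that makes the flag optional. A self-contained illustration (the array layout is our assumption):

    # Sketch: ${var:+...} expands to the flag only when the slot is non-empty.
    ckeys=(ckey0 ckey1 ckey2 "")    # assumed layout: slot 3 has no controller key
    for keyid in 0 3; do
        ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo nvmf_subsystem_add_host ... --dhchap-key "key$keyid" "${ckey[@]}"
    done
    # keyid=0 prints the extra flag pair; keyid=3 prints nothing extra.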
05:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.930 05:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MWJmZjFjOGFiZGM2YjU5MzU1ZWJiNmYxNWQ0ZTBiNmZkODFlNDNkOTA0YmI5MjBhMGRjMTRhODNlZjEzNWQxYyLyhT0=: 00:20:39.866 05:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.866 05:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.866 05:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.866 05:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.131 05:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.131 05:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:40.131 05:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.131 05:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.131 05:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:40.131 05:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:40.412 05:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:40.412 05:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.412 05:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:40.412 05:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:40.412 05:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:40.412 05:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.412 05:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.412 05:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.412 05:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.412 05:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.412 05:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.412 05:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
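With the sha256 matrix finished, the same four key slots restart under sha384, beginning with dhgroup null. That is a legal DH-HMAC-CHAP configuration: the HMAC challenge/response still runs, but no Diffie-Hellman exchange augments the secret, making it the cheapest mode and the baseline the FFDHE groups are measured against:

    # Sketch: HMAC-only authentication, no DH augmentation.
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null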
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.670 00:20:40.670 05:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.670 05:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.670 05:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.928 05:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.928 05:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.928 05:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.928 05:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.928 05:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.928 05:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:40.928 { 00:20:40.928 "cntlid": 49, 00:20:40.928 "qid": 0, 00:20:40.928 "state": "enabled", 00:20:40.928 "thread": "nvmf_tgt_poll_group_000", 00:20:40.928 "listen_address": { 00:20:40.928 "trtype": "TCP", 00:20:40.928 "adrfam": "IPv4", 00:20:40.928 "traddr": "10.0.0.2", 00:20:40.928 "trsvcid": "4420" 00:20:40.928 }, 00:20:40.928 "peer_address": { 00:20:40.928 "trtype": "TCP", 00:20:40.928 "adrfam": "IPv4", 00:20:40.928 "traddr": "10.0.0.1", 00:20:40.928 "trsvcid": "58926" 00:20:40.928 }, 00:20:40.928 "auth": { 00:20:40.928 "state": "completed", 00:20:40.928 "digest": "sha384", 00:20:40.928 "dhgroup": "null" 00:20:40.928 } 00:20:40.928 } 00:20:40.928 ]' 00:20:40.928 05:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.928 05:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.928 05:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.928 05:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:40.928 05:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.928 05:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.928 05:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.928 05:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.186 05:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDBiYTY2ZDU5NGQ0OWZiY2M0NjEyNTU1OWU1MjVmNDg2Y2I1MzYzZjU1N2ZkMGZicfhNbA==: --dhchap-ctrl-secret DHHC-1:03:N2U3ZmUxOGFlZmMyNTBkNGFiNjViMWFiM2JkN2M5NDU3MDNiMzFkNDA4YTMwNzM4MDVkMjg4Njk3MDliZDZlY/8RXHc=: 00:20:42.124 05:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.124 05:09:48 nvmf_tcp.nvmf_auth_target -- 
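The [[ nvme0 == \n\v\m\e\0 ]] lines after each attach look garbled but are ordinary xtrace output: the right-hand side of == inside [[ ]] is a glob pattern, so the script quotes it and bash's trace printer escapes every character. The underlying check is a plain literal comparison of the controller name:

    # Sketch: confirm the controller attached under the requested name.
    name=$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]] || { echo "attach failed" >&2; exit 1; }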
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.124 05:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.124 05:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.124 05:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.124 05:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.124 05:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:42.124 05:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:42.692 05:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:42.692 05:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:42.692 05:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:42.692 05:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:42.692 05:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:42.692 05:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.692 05:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.692 05:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.692 05:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.692 05:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.692 05:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.692 05:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.951 00:20:42.951 05:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.951 05:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.951 05:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.209 05:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.209 05:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.209 05:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.209 05:09:49 nvmf_tcp.nvmf_auth_target -- 
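Every pass ends with the same teardown before the next combination is configured: detach the SPDK-host controller, disconnect the kernel initiator, and revoke the host entry on the target so the next add_host starts from a clean slate. Collected in one place, with the NQNs from the trace:

    # Sketch: per-pass teardown.
    hostrpc bdev_nvme_detach_controller nvme0
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55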
common/autotest_common.sh@10 -- # set +x 00:20:43.209 05:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.210 05:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.210 { 00:20:43.210 "cntlid": 51, 00:20:43.210 "qid": 0, 00:20:43.210 "state": "enabled", 00:20:43.210 "thread": "nvmf_tgt_poll_group_000", 00:20:43.210 "listen_address": { 00:20:43.210 "trtype": "TCP", 00:20:43.210 "adrfam": "IPv4", 00:20:43.210 "traddr": "10.0.0.2", 00:20:43.210 "trsvcid": "4420" 00:20:43.210 }, 00:20:43.210 "peer_address": { 00:20:43.210 "trtype": "TCP", 00:20:43.210 "adrfam": "IPv4", 00:20:43.210 "traddr": "10.0.0.1", 00:20:43.210 "trsvcid": "53230" 00:20:43.210 }, 00:20:43.210 "auth": { 00:20:43.210 "state": "completed", 00:20:43.210 "digest": "sha384", 00:20:43.210 "dhgroup": "null" 00:20:43.210 } 00:20:43.210 } 00:20:43.210 ]' 00:20:43.210 05:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.210 05:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.210 05:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.210 05:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:43.210 05:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.210 05:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.210 05:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.210 05:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.467 05:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDg5OGZiZjU2YjBmMWU0NzYyOGJkZDg2MGIyODI3YWYhEdFv: --dhchap-ctrl-secret DHHC-1:02:MjY4MzU1YTJiZTE3YmU2N2VkYjMwMmRkYzNlYjk0OTNhNTM3ZWU5YjgxYTU3MmUxSzAYhg==: 00:20:44.403 05:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.403 05:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.403 05:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.403 05:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.403 05:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.403 05:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.403 05:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:44.403 05:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:44.661 05:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:44.661 05:09:51 
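All the DHHC-1:NN:...: strings handed to nvme connect above are pre-provisioned secrets; the NN field records how the base key was transformed (00 means no transform, 01/02/03 mean SHA-256/384/512). Compatible keys can be generated with nvme-cli; a hedged example, since the exact flags should be checked against your nvme-cli version:

    # Sketch: generate a DH-HMAC-CHAP secret (recent nvme-cli assumed).
    nvme gen-dhchap-key --hmac=1 \
        --nqn nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    # prints something like: DHHC-1:01:<base64 key>: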
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.661 05:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:44.661 05:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:44.661 05:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:44.661 05:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.661 05:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.661 05:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.661 05:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.661 05:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.661 05:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.661 05:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.228 00:20:45.228 05:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.228 05:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.228 05:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.228 05:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.228 05:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.228 05:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.228 05:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.228 05:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.228 05:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.228 { 00:20:45.228 "cntlid": 53, 00:20:45.228 "qid": 0, 00:20:45.228 "state": "enabled", 00:20:45.228 "thread": "nvmf_tgt_poll_group_000", 00:20:45.228 "listen_address": { 00:20:45.228 "trtype": "TCP", 00:20:45.228 "adrfam": "IPv4", 00:20:45.228 "traddr": "10.0.0.2", 00:20:45.228 "trsvcid": "4420" 00:20:45.228 }, 00:20:45.228 "peer_address": { 00:20:45.228 "trtype": "TCP", 00:20:45.228 "adrfam": "IPv4", 00:20:45.228 "traddr": "10.0.0.1", 00:20:45.228 "trsvcid": "53266" 00:20:45.228 }, 00:20:45.228 "auth": { 00:20:45.228 "state": "completed", 00:20:45.228 "digest": "sha384", 00:20:45.228 "dhgroup": "null" 00:20:45.228 } 00:20:45.228 } 00:20:45.228 ]' 00:20:45.228 05:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.486 05:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:20:45.486 05:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.486 05:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:45.486 05:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.486 05:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.486 05:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.486 05:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.743 05:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWFkMTJmZTgxMGU5NTY3YWQ4ZTg3Yjc5OWMyMmI1MjA2YTRmYWM1NGE2ODk4MDJheb3/8w==: --dhchap-ctrl-secret DHHC-1:01:NWY0ODgyMWZjMWE2NWNmZmQ2ZDgyNjFlMzg0YTU2ZDgcsnP3: 00:20:46.679 05:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.679 05:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.679 05:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.679 05:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.679 05:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.679 05:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.679 05:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:46.679 05:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:46.936 05:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:46.936 05:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.936 05:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:46.936 05:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:46.936 05:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:46.936 05:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.936 05:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:46.936 05:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.936 05:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.936 05:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.936 05:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
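Stepping back, the auth.sh@91-@96 markers that keep recurring give away the shape of the whole section: nested loops over digests, DH groups, and key slots, one connect_authenticate call per combination. Reconstructed from those markers (a sketch of target/auth.sh, not its literal text):

    # Sketch of the driving loops (cf. target/auth.sh@91-@96 in the trace).
    for digest in "${digests[@]}"; do          # sha256, sha384, ...
        for dhgroup in "${dhgroups[@]}"; do    # null, ffdhe2048, ..., ffdhe8192
            for keyid in "${!keys[@]}"; do     # 0 1 2 3
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done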
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:46.936 05:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:47.204 00:20:47.204 05:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:47.204 05:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.204 05:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.462 05:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.462 05:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.462 05:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.462 05:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.462 05:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.462 05:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.462 { 00:20:47.462 "cntlid": 55, 00:20:47.462 "qid": 0, 00:20:47.462 "state": "enabled", 00:20:47.462 "thread": "nvmf_tgt_poll_group_000", 00:20:47.462 "listen_address": { 00:20:47.462 "trtype": "TCP", 00:20:47.462 "adrfam": "IPv4", 00:20:47.462 "traddr": "10.0.0.2", 00:20:47.462 "trsvcid": "4420" 00:20:47.462 }, 00:20:47.462 "peer_address": { 00:20:47.462 "trtype": "TCP", 00:20:47.462 "adrfam": "IPv4", 00:20:47.462 "traddr": "10.0.0.1", 00:20:47.462 "trsvcid": "53290" 00:20:47.462 }, 00:20:47.462 "auth": { 00:20:47.462 "state": "completed", 00:20:47.462 "digest": "sha384", 00:20:47.462 "dhgroup": "null" 00:20:47.462 } 00:20:47.462 } 00:20:47.462 ]' 00:20:47.462 05:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.462 05:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.462 05:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.462 05:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:47.462 05:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.719 05:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.719 05:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.719 05:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.977 05:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MWJmZjFjOGFiZGM2YjU5MzU1ZWJiNmYxNWQ0ZTBiNmZkODFlNDNkOTA0YmI5MjBhMGRjMTRhODNlZjEzNWQxYyLyhT0=: 00:20:48.913 05:09:55 
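The xtrace_disable / [[ 0 == 0 ]] pairs bracketing every rpc_cmd (autotest_common.sh@559 and @587) are harness plumbing rather than test logic: tracing is muted while the RPC runs, then the saved exit status is asserted. An assumed shape of that wrapper; send_rpc_request is a hypothetical stand-in for whatever transport the real helper uses:

    # Sketch (assumed shape) of the rpc_cmd wrapper seen in the trace.
    rpc_cmd() {
        xtrace_disable                   # @559: mute tracing around the call
        local rc=0
        send_rpc_request "$@" || rc=$?   # hypothetical transport helper
        xtrace_restore
        [[ $rc == 0 ]]                   # @587: fail the test on RPC error
    }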
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.913 05:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.913 05:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.913 05:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.913 05:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.913 05:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:48.913 05:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:48.913 05:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:48.913 05:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:48.913 05:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:48.913 05:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.913 05:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:48.913 05:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:48.913 05:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:48.913 05:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.913 05:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.913 05:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.913 05:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.172 05:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.172 05:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.173 05:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.429 00:20:49.429 05:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.429 05:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.429 05:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:49.685 05:09:55 nvmf_tcp.nvmf_auth_target -- 
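After the null-group pass, the sha384 loop advances to ffdhe2048, the smallest of the RFC 7919 finite-field groups NVMe in-band authentication allows; the same four key slots get replayed under it:

    # Sketch: switch the host to the ffdhe2048 pass.
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048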
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.685 05:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.685 05:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.685 05:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.685 05:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.685 05:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.685 { 00:20:49.685 "cntlid": 57, 00:20:49.685 "qid": 0, 00:20:49.685 "state": "enabled", 00:20:49.685 "thread": "nvmf_tgt_poll_group_000", 00:20:49.685 "listen_address": { 00:20:49.685 "trtype": "TCP", 00:20:49.685 "adrfam": "IPv4", 00:20:49.685 "traddr": "10.0.0.2", 00:20:49.685 "trsvcid": "4420" 00:20:49.685 }, 00:20:49.685 "peer_address": { 00:20:49.685 "trtype": "TCP", 00:20:49.685 "adrfam": "IPv4", 00:20:49.685 "traddr": "10.0.0.1", 00:20:49.685 "trsvcid": "53324" 00:20:49.685 }, 00:20:49.685 "auth": { 00:20:49.685 "state": "completed", 00:20:49.685 "digest": "sha384", 00:20:49.685 "dhgroup": "ffdhe2048" 00:20:49.685 } 00:20:49.685 } 00:20:49.685 ]' 00:20:49.685 05:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.685 05:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.685 05:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.685 05:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:49.685 05:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.685 05:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.686 05:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.686 05:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.943 05:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDBiYTY2ZDU5NGQ0OWZiY2M0NjEyNTU1OWU1MjVmNDg2Y2I1MzYzZjU1N2ZkMGZicfhNbA==: --dhchap-ctrl-secret DHHC-1:03:N2U3ZmUxOGFlZmMyNTBkNGFiNjViMWFiM2JkN2M5NDU3MDNiMzFkNDA4YTMwNzM4MDVkMjg4Njk3MDliZDZlY/8RXHc=: 00:20:50.876 05:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.876 05:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.876 05:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.876 05:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.876 05:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.876 05:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.876 05:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:50.876 05:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:51.134 05:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:51.134 05:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.134 05:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:51.134 05:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:51.134 05:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:51.134 05:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.134 05:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.134 05:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.134 05:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.134 05:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.134 05:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.134 05:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.392 00:20:51.651 05:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.651 05:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.651 05:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.651 05:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.909 05:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.909 05:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.909 05:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.909 05:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.909 05:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.909 { 00:20:51.909 "cntlid": 59, 00:20:51.909 "qid": 0, 00:20:51.909 "state": "enabled", 00:20:51.909 "thread": "nvmf_tgt_poll_group_000", 00:20:51.909 "listen_address": { 00:20:51.909 "trtype": "TCP", 00:20:51.909 "adrfam": "IPv4", 00:20:51.909 "traddr": "10.0.0.2", 00:20:51.909 "trsvcid": "4420" 00:20:51.909 }, 00:20:51.909 "peer_address": { 00:20:51.909 "trtype": "TCP", 00:20:51.909 "adrfam": "IPv4", 00:20:51.909 
"traddr": "10.0.0.1", 00:20:51.909 "trsvcid": "52472" 00:20:51.909 }, 00:20:51.909 "auth": { 00:20:51.909 "state": "completed", 00:20:51.909 "digest": "sha384", 00:20:51.909 "dhgroup": "ffdhe2048" 00:20:51.909 } 00:20:51.909 } 00:20:51.909 ]' 00:20:51.909 05:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.909 05:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.909 05:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.909 05:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:51.909 05:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.909 05:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.909 05:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.909 05:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.167 05:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDg5OGZiZjU2YjBmMWU0NzYyOGJkZDg2MGIyODI3YWYhEdFv: --dhchap-ctrl-secret DHHC-1:02:MjY4MzU1YTJiZTE3YmU2N2VkYjMwMmRkYzNlYjk0OTNhNTM3ZWU5YjgxYTU3MmUxSzAYhg==: 00:20:53.125 05:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.125 05:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.125 05:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.125 05:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.125 05:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.125 05:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.125 05:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:53.125 05:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:53.383 05:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:53.383 05:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.383 05:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:53.383 05:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:53.383 05:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:53.383 05:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.383 05:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.383 05:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.383 05:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.383 05:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.383 05:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.383 05:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.641 00:20:53.641 05:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:53.641 05:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:53.641 05:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.899 05:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.899 05:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.899 05:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.899 05:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.899 05:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.899 05:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:53.899 { 00:20:53.899 "cntlid": 61, 00:20:53.899 "qid": 0, 00:20:53.899 "state": "enabled", 00:20:53.899 "thread": "nvmf_tgt_poll_group_000", 00:20:53.899 "listen_address": { 00:20:53.899 "trtype": "TCP", 00:20:53.899 "adrfam": "IPv4", 00:20:53.899 "traddr": "10.0.0.2", 00:20:53.899 "trsvcid": "4420" 00:20:53.899 }, 00:20:53.899 "peer_address": { 00:20:53.899 "trtype": "TCP", 00:20:53.899 "adrfam": "IPv4", 00:20:53.899 "traddr": "10.0.0.1", 00:20:53.899 "trsvcid": "52496" 00:20:53.899 }, 00:20:53.899 "auth": { 00:20:53.899 "state": "completed", 00:20:53.899 "digest": "sha384", 00:20:53.899 "dhgroup": "ffdhe2048" 00:20:53.899 } 00:20:53.899 } 00:20:53.899 ]' 00:20:53.899 05:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.157 05:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.157 05:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.157 05:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:54.157 05:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.157 05:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.157 05:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.158 05:10:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.415 05:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWFkMTJmZTgxMGU5NTY3YWQ4ZTg3Yjc5OWMyMmI1MjA2YTRmYWM1NGE2ODk4MDJheb3/8w==: --dhchap-ctrl-secret DHHC-1:01:NWY0ODgyMWZjMWE2NWNmZmQ2ZDgyNjFlMzg0YTU2ZDgcsnP3: 00:20:55.353 05:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.353 05:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.353 05:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.353 05:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.353 05:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.353 05:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:55.353 05:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:55.353 05:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:55.611 05:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:55.611 05:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.611 05:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:55.611 05:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:55.611 05:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:55.611 05:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.611 05:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:55.611 05:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.611 05:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.611 05:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.611 05:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:55.611 05:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:55.868 00:20:55.868 05:10:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.868 05:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:55.868 05:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.127 05:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.127 05:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.127 05:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.127 05:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.127 05:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.127 05:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:56.127 { 00:20:56.127 "cntlid": 63, 00:20:56.127 "qid": 0, 00:20:56.127 "state": "enabled", 00:20:56.127 "thread": "nvmf_tgt_poll_group_000", 00:20:56.127 "listen_address": { 00:20:56.127 "trtype": "TCP", 00:20:56.127 "adrfam": "IPv4", 00:20:56.127 "traddr": "10.0.0.2", 00:20:56.127 "trsvcid": "4420" 00:20:56.127 }, 00:20:56.127 "peer_address": { 00:20:56.127 "trtype": "TCP", 00:20:56.127 "adrfam": "IPv4", 00:20:56.127 "traddr": "10.0.0.1", 00:20:56.127 "trsvcid": "52522" 00:20:56.127 }, 00:20:56.127 "auth": { 00:20:56.127 "state": "completed", 00:20:56.127 "digest": "sha384", 00:20:56.127 "dhgroup": "ffdhe2048" 00:20:56.127 } 00:20:56.127 } 00:20:56.127 ]' 00:20:56.127 05:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:56.127 05:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.127 05:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:56.385 05:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:56.385 05:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:56.385 05:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.385 05:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.385 05:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.642 05:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MWJmZjFjOGFiZGM2YjU5MzU1ZWJiNmYxNWQ0ZTBiNmZkODFlNDNkOTA0YmI5MjBhMGRjMTRhODNlZjEzNWQxYyLyhT0=: 00:20:57.577 05:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.577 05:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.577 05:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.577 05:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
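The trace above and below repeats one verification cycle per key and dhgroup: the host-side bdev layer is configured with the digest/dhgroup under test, the key is registered on the subsystem, a controller is attached and the negotiated auth state of its qpair is checked with jq, the path is torn down, and the same key is then exercised once more through the kernel initiator with nvme-cli. A minimal sketch of that cycle follows, assuming the same RPC script path, host socket (/var/tmp/host.sock), subsystem NQN, and host UUID used throughout this run; KEY0/CKEY0 stand in for the DHHC-1 secrets logged above and are placeholders, not variables from the test script, and rpc_cmd is the target-side RPC wrapper seen in the trace (it talks to the target's default socket, while hostrpc goes through -s /var/tmp/host.sock).

#!/usr/bin/env bash
# Sketch of one connect/verify/teardown cycle from target/auth.sh (not the script itself).
HOSTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55"
SUBNQN="nqn.2024-03.io.spdk:cnode0"

# Restrict the host to the digest/dhgroup under test.
$HOSTRPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
# Register the key (and controller key) for this host on the target subsystem.
rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Attach a host-side controller; DH-HMAC-CHAP runs during connect.
$HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Verify what the target negotiated on the resulting qpair.
rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'    # expect "completed"
rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.digest'   # expect "sha384"
rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.dhgroup'  # expect "ffdhe3072"
$HOSTRPC bdev_nvme_detach_controller nvme0
# Exercise the same key through the kernel initiator, then tear down.
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret "$KEY0" --dhchap-ctrl-secret "$CKEY0"
nvme disconnect -n "$SUBNQN"
rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The outer loops visible in the trace (for dhgroup in "${dhgroups[@]}" / for keyid in "${!keys[@]}") simply re-run this cycle for each dhgroup (ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...) and each key index 0-3, which is why the same command shapes recur below with only the dhgroup and key number changing.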
00:20:57.577 05:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.577 05:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:57.577 05:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:57.577 05:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:57.577 05:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:57.835 05:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:57.835 05:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:57.835 05:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:57.835 05:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:57.835 05:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:57.835 05:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.835 05:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.835 05:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.835 05:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.835 05:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.835 05:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.835 05:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.093 00:20:58.093 05:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.093 05:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:58.093 05:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.352 05:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.352 05:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.352 05:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.352 05:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.352 05:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.352 05:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.352 { 
00:20:58.352 "cntlid": 65, 00:20:58.352 "qid": 0, 00:20:58.352 "state": "enabled", 00:20:58.352 "thread": "nvmf_tgt_poll_group_000", 00:20:58.352 "listen_address": { 00:20:58.352 "trtype": "TCP", 00:20:58.352 "adrfam": "IPv4", 00:20:58.352 "traddr": "10.0.0.2", 00:20:58.352 "trsvcid": "4420" 00:20:58.352 }, 00:20:58.352 "peer_address": { 00:20:58.352 "trtype": "TCP", 00:20:58.352 "adrfam": "IPv4", 00:20:58.352 "traddr": "10.0.0.1", 00:20:58.352 "trsvcid": "52548" 00:20:58.352 }, 00:20:58.352 "auth": { 00:20:58.352 "state": "completed", 00:20:58.352 "digest": "sha384", 00:20:58.352 "dhgroup": "ffdhe3072" 00:20:58.352 } 00:20:58.352 } 00:20:58.352 ]' 00:20:58.352 05:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.352 05:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.352 05:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.609 05:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:58.609 05:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.609 05:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.609 05:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.609 05:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.867 05:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDBiYTY2ZDU5NGQ0OWZiY2M0NjEyNTU1OWU1MjVmNDg2Y2I1MzYzZjU1N2ZkMGZicfhNbA==: --dhchap-ctrl-secret DHHC-1:03:N2U3ZmUxOGFlZmMyNTBkNGFiNjViMWFiM2JkN2M5NDU3MDNiMzFkNDA4YTMwNzM4MDVkMjg4Njk3MDliZDZlY/8RXHc=: 00:20:59.806 05:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.806 05:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.806 05:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.806 05:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.806 05:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.806 05:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.806 05:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:59.806 05:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:00.065 05:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:21:00.065 05:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:00.065 05:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:21:00.065 05:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:00.065 05:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:00.065 05:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.065 05:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.065 05:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.065 05:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.065 05:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.065 05:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.065 05:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.323 00:21:00.323 05:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.323 05:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.323 05:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.581 05:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.581 05:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.581 05:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.581 05:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.581 05:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.581 05:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.581 { 00:21:00.581 "cntlid": 67, 00:21:00.581 "qid": 0, 00:21:00.581 "state": "enabled", 00:21:00.581 "thread": "nvmf_tgt_poll_group_000", 00:21:00.581 "listen_address": { 00:21:00.581 "trtype": "TCP", 00:21:00.581 "adrfam": "IPv4", 00:21:00.581 "traddr": "10.0.0.2", 00:21:00.581 "trsvcid": "4420" 00:21:00.581 }, 00:21:00.581 "peer_address": { 00:21:00.581 "trtype": "TCP", 00:21:00.581 "adrfam": "IPv4", 00:21:00.581 "traddr": "10.0.0.1", 00:21:00.581 "trsvcid": "52576" 00:21:00.581 }, 00:21:00.581 "auth": { 00:21:00.581 "state": "completed", 00:21:00.581 "digest": "sha384", 00:21:00.581 "dhgroup": "ffdhe3072" 00:21:00.581 } 00:21:00.581 } 00:21:00.581 ]' 00:21:00.581 05:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.839 05:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.839 05:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.839 05:10:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:00.839 05:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.839 05:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.839 05:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.839 05:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.098 05:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDg5OGZiZjU2YjBmMWU0NzYyOGJkZDg2MGIyODI3YWYhEdFv: --dhchap-ctrl-secret DHHC-1:02:MjY4MzU1YTJiZTE3YmU2N2VkYjMwMmRkYzNlYjk0OTNhNTM3ZWU5YjgxYTU3MmUxSzAYhg==: 00:21:02.034 05:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.034 05:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.034 05:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.034 05:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.034 05:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.034 05:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.034 05:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:02.034 05:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:02.292 05:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:21:02.292 05:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:02.292 05:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:02.292 05:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:02.292 05:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:02.292 05:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.292 05:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.292 05:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.292 05:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.292 05:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.292 05:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.292 05:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.550 00:21:02.550 05:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.550 05:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.550 05:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.808 05:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.808 05:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.808 05:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.808 05:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.808 05:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.808 05:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.808 { 00:21:02.808 "cntlid": 69, 00:21:02.808 "qid": 0, 00:21:02.808 "state": "enabled", 00:21:02.808 "thread": "nvmf_tgt_poll_group_000", 00:21:02.808 "listen_address": { 00:21:02.808 "trtype": "TCP", 00:21:02.808 "adrfam": "IPv4", 00:21:02.808 "traddr": "10.0.0.2", 00:21:02.808 "trsvcid": "4420" 00:21:02.808 }, 00:21:02.808 "peer_address": { 00:21:02.808 "trtype": "TCP", 00:21:02.808 "adrfam": "IPv4", 00:21:02.808 "traddr": "10.0.0.1", 00:21:02.809 "trsvcid": "36854" 00:21:02.809 }, 00:21:02.809 "auth": { 00:21:02.809 "state": "completed", 00:21:02.809 "digest": "sha384", 00:21:02.809 "dhgroup": "ffdhe3072" 00:21:02.809 } 00:21:02.809 } 00:21:02.809 ]' 00:21:02.809 05:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.067 05:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.067 05:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.067 05:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:03.067 05:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.067 05:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.067 05:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.067 05:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.325 05:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWFkMTJmZTgxMGU5NTY3YWQ4ZTg3Yjc5OWMyMmI1MjA2YTRmYWM1NGE2ODk4MDJheb3/8w==: --dhchap-ctrl-secret 
DHHC-1:01:NWY0ODgyMWZjMWE2NWNmZmQ2ZDgyNjFlMzg0YTU2ZDgcsnP3: 00:21:04.260 05:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.260 05:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.260 05:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.260 05:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.260 05:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.260 05:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.260 05:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:04.260 05:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:04.518 05:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:21:04.518 05:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:04.518 05:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:04.518 05:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:04.518 05:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:04.518 05:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.518 05:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:04.518 05:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.518 05:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.518 05:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.518 05:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:04.518 05:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:04.776 00:21:04.776 05:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.776 05:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.776 05:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.341 05:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.341 05:10:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.341 05:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.341 05:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.342 05:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.342 05:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:05.342 { 00:21:05.342 "cntlid": 71, 00:21:05.342 "qid": 0, 00:21:05.342 "state": "enabled", 00:21:05.342 "thread": "nvmf_tgt_poll_group_000", 00:21:05.342 "listen_address": { 00:21:05.342 "trtype": "TCP", 00:21:05.342 "adrfam": "IPv4", 00:21:05.342 "traddr": "10.0.0.2", 00:21:05.342 "trsvcid": "4420" 00:21:05.342 }, 00:21:05.342 "peer_address": { 00:21:05.342 "trtype": "TCP", 00:21:05.342 "adrfam": "IPv4", 00:21:05.342 "traddr": "10.0.0.1", 00:21:05.342 "trsvcid": "36890" 00:21:05.342 }, 00:21:05.342 "auth": { 00:21:05.342 "state": "completed", 00:21:05.342 "digest": "sha384", 00:21:05.342 "dhgroup": "ffdhe3072" 00:21:05.342 } 00:21:05.342 } 00:21:05.342 ]' 00:21:05.342 05:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.342 05:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.342 05:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:05.342 05:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:05.342 05:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.342 05:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.342 05:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.342 05:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.598 05:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MWJmZjFjOGFiZGM2YjU5MzU1ZWJiNmYxNWQ0ZTBiNmZkODFlNDNkOTA0YmI5MjBhMGRjMTRhODNlZjEzNWQxYyLyhT0=: 00:21:06.538 05:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.538 05:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.538 05:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.538 05:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.538 05:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.538 05:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.538 05:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.538 05:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:06.538 05:10:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:06.796 05:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:21:06.796 05:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.796 05:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:06.796 05:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:06.796 05:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:06.796 05:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.796 05:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.796 05:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.796 05:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.796 05:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.796 05:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.796 05:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.360 00:21:07.360 05:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:07.360 05:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:07.360 05:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.618 05:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.618 05:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.618 05:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.618 05:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.618 05:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.618 05:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:07.618 { 00:21:07.618 "cntlid": 73, 00:21:07.618 "qid": 0, 00:21:07.618 "state": "enabled", 00:21:07.618 "thread": "nvmf_tgt_poll_group_000", 00:21:07.618 "listen_address": { 00:21:07.618 "trtype": "TCP", 00:21:07.618 "adrfam": "IPv4", 00:21:07.618 "traddr": "10.0.0.2", 00:21:07.618 "trsvcid": "4420" 00:21:07.618 }, 00:21:07.618 "peer_address": { 00:21:07.618 "trtype": "TCP", 00:21:07.618 "adrfam": "IPv4", 00:21:07.618 "traddr": "10.0.0.1", 00:21:07.618 "trsvcid": "36932" 00:21:07.618 }, 00:21:07.618 "auth": { 00:21:07.618 
"state": "completed", 00:21:07.618 "digest": "sha384", 00:21:07.618 "dhgroup": "ffdhe4096" 00:21:07.618 } 00:21:07.618 } 00:21:07.618 ]' 00:21:07.618 05:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:07.618 05:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:07.618 05:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:07.618 05:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:07.618 05:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:07.618 05:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.618 05:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.618 05:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.875 05:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDBiYTY2ZDU5NGQ0OWZiY2M0NjEyNTU1OWU1MjVmNDg2Y2I1MzYzZjU1N2ZkMGZicfhNbA==: --dhchap-ctrl-secret DHHC-1:03:N2U3ZmUxOGFlZmMyNTBkNGFiNjViMWFiM2JkN2M5NDU3MDNiMzFkNDA4YTMwNzM4MDVkMjg4Njk3MDliZDZlY/8RXHc=: 00:21:08.808 05:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.808 05:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.808 05:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.808 05:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.808 05:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.808 05:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.808 05:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:08.808 05:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:09.066 05:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:21:09.066 05:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:09.066 05:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:09.066 05:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:09.066 05:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:09.066 05:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.066 05:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.066 05:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.066 05:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.066 05:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.066 05:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.066 05:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.634 00:21:09.634 05:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.634 05:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.634 05:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.893 05:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.893 05:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.893 05:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.893 05:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.893 05:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.893 05:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.893 { 00:21:09.893 "cntlid": 75, 00:21:09.893 "qid": 0, 00:21:09.893 "state": "enabled", 00:21:09.893 "thread": "nvmf_tgt_poll_group_000", 00:21:09.893 "listen_address": { 00:21:09.893 "trtype": "TCP", 00:21:09.893 "adrfam": "IPv4", 00:21:09.893 "traddr": "10.0.0.2", 00:21:09.893 "trsvcid": "4420" 00:21:09.893 }, 00:21:09.893 "peer_address": { 00:21:09.893 "trtype": "TCP", 00:21:09.893 "adrfam": "IPv4", 00:21:09.893 "traddr": "10.0.0.1", 00:21:09.893 "trsvcid": "36964" 00:21:09.893 }, 00:21:09.893 "auth": { 00:21:09.893 "state": "completed", 00:21:09.893 "digest": "sha384", 00:21:09.893 "dhgroup": "ffdhe4096" 00:21:09.893 } 00:21:09.893 } 00:21:09.893 ]' 00:21:09.893 05:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.893 05:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:09.893 05:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.893 05:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:09.893 05:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.893 05:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.893 05:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.893 05:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.151 05:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDg5OGZiZjU2YjBmMWU0NzYyOGJkZDg2MGIyODI3YWYhEdFv: --dhchap-ctrl-secret DHHC-1:02:MjY4MzU1YTJiZTE3YmU2N2VkYjMwMmRkYzNlYjk0OTNhNTM3ZWU5YjgxYTU3MmUxSzAYhg==: 00:21:11.085 05:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.085 05:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.085 05:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.085 05:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.085 05:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.085 05:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:11.085 05:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:11.085 05:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:11.344 05:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:21:11.344 05:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.344 05:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:11.344 05:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:11.344 05:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:11.344 05:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.344 05:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.344 05:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.344 05:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.344 05:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.344 05:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.344 05:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:21:11.913 00:21:11.913 05:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.913 05:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.913 05:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:12.172 05:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.172 05:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.172 05:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.172 05:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.172 05:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.172 05:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:12.172 { 00:21:12.172 "cntlid": 77, 00:21:12.172 "qid": 0, 00:21:12.172 "state": "enabled", 00:21:12.172 "thread": "nvmf_tgt_poll_group_000", 00:21:12.172 "listen_address": { 00:21:12.172 "trtype": "TCP", 00:21:12.172 "adrfam": "IPv4", 00:21:12.172 "traddr": "10.0.0.2", 00:21:12.172 "trsvcid": "4420" 00:21:12.172 }, 00:21:12.172 "peer_address": { 00:21:12.172 "trtype": "TCP", 00:21:12.172 "adrfam": "IPv4", 00:21:12.172 "traddr": "10.0.0.1", 00:21:12.172 "trsvcid": "47880" 00:21:12.172 }, 00:21:12.172 "auth": { 00:21:12.172 "state": "completed", 00:21:12.172 "digest": "sha384", 00:21:12.172 "dhgroup": "ffdhe4096" 00:21:12.172 } 00:21:12.172 } 00:21:12.172 ]' 00:21:12.172 05:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:12.172 05:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:12.172 05:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:12.172 05:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:12.172 05:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:12.172 05:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.172 05:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.172 05:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.430 05:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWFkMTJmZTgxMGU5NTY3YWQ4ZTg3Yjc5OWMyMmI1MjA2YTRmYWM1NGE2ODk4MDJheb3/8w==: --dhchap-ctrl-secret DHHC-1:01:NWY0ODgyMWZjMWE2NWNmZmQ2ZDgyNjFlMzg0YTU2ZDgcsnP3: 00:21:13.368 05:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.369 05:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.369 05:10:19 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.369 05:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.369 05:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.369 05:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:13.369 05:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:13.369 05:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:13.627 05:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:21:13.627 05:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.627 05:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:13.627 05:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:13.627 05:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:13.627 05:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.627 05:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:13.627 05:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.627 05:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.627 05:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.627 05:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:13.627 05:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:14.193 00:21:14.193 05:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:14.193 05:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.193 05:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:14.193 05:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.193 05:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.193 05:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.193 05:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.193 05:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.193 05:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:14.193 { 00:21:14.193 "cntlid": 79, 00:21:14.193 "qid": 
0, 00:21:14.193 "state": "enabled", 00:21:14.193 "thread": "nvmf_tgt_poll_group_000", 00:21:14.193 "listen_address": { 00:21:14.193 "trtype": "TCP", 00:21:14.193 "adrfam": "IPv4", 00:21:14.193 "traddr": "10.0.0.2", 00:21:14.193 "trsvcid": "4420" 00:21:14.193 }, 00:21:14.193 "peer_address": { 00:21:14.193 "trtype": "TCP", 00:21:14.193 "adrfam": "IPv4", 00:21:14.193 "traddr": "10.0.0.1", 00:21:14.193 "trsvcid": "47914" 00:21:14.193 }, 00:21:14.193 "auth": { 00:21:14.193 "state": "completed", 00:21:14.193 "digest": "sha384", 00:21:14.193 "dhgroup": "ffdhe4096" 00:21:14.193 } 00:21:14.193 } 00:21:14.193 ]' 00:21:14.193 05:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:14.451 05:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:14.451 05:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:14.451 05:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:14.451 05:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:14.451 05:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.451 05:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.451 05:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.709 05:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MWJmZjFjOGFiZGM2YjU5MzU1ZWJiNmYxNWQ0ZTBiNmZkODFlNDNkOTA0YmI5MjBhMGRjMTRhODNlZjEzNWQxYyLyhT0=: 00:21:15.647 05:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.647 05:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.647 05:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.647 05:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.647 05:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.647 05:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.647 05:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.647 05:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:15.647 05:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:15.905 05:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:21:15.905 05:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.905 05:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:15.905 05:10:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:15.905 05:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:15.905 05:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.905 05:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.905 05:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.905 05:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.905 05:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.905 05:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.905 05:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.473 00:21:16.473 05:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:16.473 05:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:16.473 05:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.730 05:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.730 05:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.730 05:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.730 05:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.730 05:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.730 05:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:16.730 { 00:21:16.730 "cntlid": 81, 00:21:16.730 "qid": 0, 00:21:16.730 "state": "enabled", 00:21:16.730 "thread": "nvmf_tgt_poll_group_000", 00:21:16.730 "listen_address": { 00:21:16.730 "trtype": "TCP", 00:21:16.730 "adrfam": "IPv4", 00:21:16.730 "traddr": "10.0.0.2", 00:21:16.730 "trsvcid": "4420" 00:21:16.730 }, 00:21:16.730 "peer_address": { 00:21:16.730 "trtype": "TCP", 00:21:16.730 "adrfam": "IPv4", 00:21:16.730 "traddr": "10.0.0.1", 00:21:16.730 "trsvcid": "47934" 00:21:16.730 }, 00:21:16.730 "auth": { 00:21:16.730 "state": "completed", 00:21:16.730 "digest": "sha384", 00:21:16.730 "dhgroup": "ffdhe6144" 00:21:16.730 } 00:21:16.730 } 00:21:16.730 ]' 00:21:16.730 05:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:16.730 05:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:16.730 05:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:16.730 05:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:16.730 05:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.730 05:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.730 05:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.730 05:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.989 05:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDBiYTY2ZDU5NGQ0OWZiY2M0NjEyNTU1OWU1MjVmNDg2Y2I1MzYzZjU1N2ZkMGZicfhNbA==: --dhchap-ctrl-secret DHHC-1:03:N2U3ZmUxOGFlZmMyNTBkNGFiNjViMWFiM2JkN2M5NDU3MDNiMzFkNDA4YTMwNzM4MDVkMjg4Njk3MDliZDZlY/8RXHc=: 00:21:17.937 05:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.937 05:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.937 05:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.937 05:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.937 05:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.937 05:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.937 05:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:17.937 05:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:18.195 05:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:21:18.195 05:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:18.195 05:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:18.195 05:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:18.195 05:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:18.195 05:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.195 05:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.195 05:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.195 05:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.195 05:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.195 05:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.195 05:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.762 00:21:18.762 05:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:18.762 05:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:18.762 05:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.020 05:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.020 05:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.020 05:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.020 05:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.020 05:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.020 05:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:19.020 { 00:21:19.020 "cntlid": 83, 00:21:19.020 "qid": 0, 00:21:19.020 "state": "enabled", 00:21:19.020 "thread": "nvmf_tgt_poll_group_000", 00:21:19.020 "listen_address": { 00:21:19.020 "trtype": "TCP", 00:21:19.020 "adrfam": "IPv4", 00:21:19.020 "traddr": "10.0.0.2", 00:21:19.020 "trsvcid": "4420" 00:21:19.020 }, 00:21:19.020 "peer_address": { 00:21:19.020 "trtype": "TCP", 00:21:19.020 "adrfam": "IPv4", 00:21:19.020 "traddr": "10.0.0.1", 00:21:19.020 "trsvcid": "47964" 00:21:19.020 }, 00:21:19.020 "auth": { 00:21:19.020 "state": "completed", 00:21:19.020 "digest": "sha384", 00:21:19.020 "dhgroup": "ffdhe6144" 00:21:19.020 } 00:21:19.020 } 00:21:19.020 ]' 00:21:19.020 05:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:19.277 05:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:19.277 05:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:19.277 05:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:19.277 05:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:19.277 05:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.277 05:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.277 05:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.535 05:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDg5OGZiZjU2YjBmMWU0NzYyOGJkZDg2MGIyODI3YWYhEdFv: --dhchap-ctrl-secret 
DHHC-1:02:MjY4MzU1YTJiZTE3YmU2N2VkYjMwMmRkYzNlYjk0OTNhNTM3ZWU5YjgxYTU3MmUxSzAYhg==: 00:21:20.468 05:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.468 05:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.468 05:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.468 05:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.468 05:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.468 05:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:20.468 05:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:20.468 05:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:20.726 05:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:21:20.726 05:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:20.726 05:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:20.726 05:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:20.726 05:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:20.726 05:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.726 05:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.726 05:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.726 05:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.726 05:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.726 05:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.726 05:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.291 00:21:21.291 05:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:21.291 05:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:21.291 05:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.549 05:10:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.549 05:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.549 05:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.549 05:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.549 05:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.549 05:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:21.549 { 00:21:21.549 "cntlid": 85, 00:21:21.549 "qid": 0, 00:21:21.549 "state": "enabled", 00:21:21.549 "thread": "nvmf_tgt_poll_group_000", 00:21:21.549 "listen_address": { 00:21:21.549 "trtype": "TCP", 00:21:21.549 "adrfam": "IPv4", 00:21:21.549 "traddr": "10.0.0.2", 00:21:21.549 "trsvcid": "4420" 00:21:21.549 }, 00:21:21.549 "peer_address": { 00:21:21.549 "trtype": "TCP", 00:21:21.549 "adrfam": "IPv4", 00:21:21.549 "traddr": "10.0.0.1", 00:21:21.549 "trsvcid": "47990" 00:21:21.549 }, 00:21:21.549 "auth": { 00:21:21.549 "state": "completed", 00:21:21.549 "digest": "sha384", 00:21:21.549 "dhgroup": "ffdhe6144" 00:21:21.549 } 00:21:21.549 } 00:21:21.549 ]' 00:21:21.549 05:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:21.549 05:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:21.550 05:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:21.550 05:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:21.550 05:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:21.550 05:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.550 05:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.550 05:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.808 05:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWFkMTJmZTgxMGU5NTY3YWQ4ZTg3Yjc5OWMyMmI1MjA2YTRmYWM1NGE2ODk4MDJheb3/8w==: --dhchap-ctrl-secret DHHC-1:01:NWY0ODgyMWZjMWE2NWNmZmQ2ZDgyNjFlMzg0YTU2ZDgcsnP3: 00:21:23.180 05:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.180 05:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.180 05:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.180 05:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.180 05:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.180 05:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:23.180 05:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
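Every pass above repeats the same DH-HMAC-CHAP flow: the host's allowed digests and DH groups are set over the host RPC socket, the target is told which DH-CHAP key the host NQN must present, and a controller attach then drives the handshake. A condensed sketch of one pass in plain shell, assuming the key names (key2/ckey2) were registered earlier in the test setup, outside this excerpt:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Host side (-s /var/tmp/host.sock): restrict auth to sha384 + ffdhe6144
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    # Target side: allow the host NQN, binding it to key2 (ckey2 enables bidirectional auth)
    "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Host side: attach; DH-CHAP runs during the fabric connect, so a key
    # mismatch surfaces here as a failed attach
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

The nvme connect lines play the same combinations back through the kernel initiator; there the DHHC-1:NN: prefix on each secret identifies the HMAC used for the transformed key (00 = untransformed, 01/02/03 = SHA-256/384/512), which is why the key0 secret above carries :00: and the key3 secret carries :03:.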
00:21:23.180 05:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:23.180 05:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:21:23.180 05:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:23.180 05:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:23.180 05:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:23.180 05:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:23.180 05:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.180 05:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:23.180 05:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.180 05:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.180 05:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.180 05:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:23.180 05:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:23.746 00:21:23.746 05:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:23.746 05:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:23.746 05:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.004 05:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.004 05:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.004 05:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.004 05:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.004 05:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.004 05:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:24.004 { 00:21:24.004 "cntlid": 87, 00:21:24.004 "qid": 0, 00:21:24.004 "state": "enabled", 00:21:24.004 "thread": "nvmf_tgt_poll_group_000", 00:21:24.004 "listen_address": { 00:21:24.004 "trtype": "TCP", 00:21:24.004 "adrfam": "IPv4", 00:21:24.004 "traddr": "10.0.0.2", 00:21:24.004 "trsvcid": "4420" 00:21:24.004 }, 00:21:24.004 "peer_address": { 00:21:24.004 "trtype": "TCP", 00:21:24.004 "adrfam": "IPv4", 00:21:24.004 "traddr": "10.0.0.1", 00:21:24.004 "trsvcid": "32840" 00:21:24.004 }, 00:21:24.004 "auth": { 00:21:24.004 "state": "completed", 
00:21:24.004 "digest": "sha384", 00:21:24.004 "dhgroup": "ffdhe6144" 00:21:24.004 } 00:21:24.004 } 00:21:24.004 ]' 00:21:24.004 05:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:24.004 05:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:24.004 05:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:24.004 05:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:24.004 05:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:24.004 05:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.004 05:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.004 05:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.261 05:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MWJmZjFjOGFiZGM2YjU5MzU1ZWJiNmYxNWQ0ZTBiNmZkODFlNDNkOTA0YmI5MjBhMGRjMTRhODNlZjEzNWQxYyLyhT0=: 00:21:25.192 05:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.192 05:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.192 05:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.192 05:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.192 05:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.192 05:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:25.192 05:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:25.192 05:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:25.192 05:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:25.451 05:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:21:25.451 05:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:25.451 05:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:25.451 05:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:25.451 05:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:25.451 05:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.451 05:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:25.451 05:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.451 05:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.451 05:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.451 05:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.451 05:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.385 00:21:26.385 05:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.385 05:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.385 05:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.641 05:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.641 05:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.641 05:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.641 05:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.641 05:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.641 05:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.641 { 00:21:26.641 "cntlid": 89, 00:21:26.641 "qid": 0, 00:21:26.641 "state": "enabled", 00:21:26.641 "thread": "nvmf_tgt_poll_group_000", 00:21:26.641 "listen_address": { 00:21:26.641 "trtype": "TCP", 00:21:26.641 "adrfam": "IPv4", 00:21:26.641 "traddr": "10.0.0.2", 00:21:26.641 "trsvcid": "4420" 00:21:26.641 }, 00:21:26.641 "peer_address": { 00:21:26.641 "trtype": "TCP", 00:21:26.641 "adrfam": "IPv4", 00:21:26.641 "traddr": "10.0.0.1", 00:21:26.641 "trsvcid": "32862" 00:21:26.641 }, 00:21:26.641 "auth": { 00:21:26.641 "state": "completed", 00:21:26.641 "digest": "sha384", 00:21:26.641 "dhgroup": "ffdhe8192" 00:21:26.641 } 00:21:26.641 } 00:21:26.641 ]' 00:21:26.641 05:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.641 05:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:26.641 05:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:26.641 05:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:26.641 05:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:26.897 05:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.897 05:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.897 05:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.154 05:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDBiYTY2ZDU5NGQ0OWZiY2M0NjEyNTU1OWU1MjVmNDg2Y2I1MzYzZjU1N2ZkMGZicfhNbA==: --dhchap-ctrl-secret DHHC-1:03:N2U3ZmUxOGFlZmMyNTBkNGFiNjViMWFiM2JkN2M5NDU3MDNiMzFkNDA4YTMwNzM4MDVkMjg4Njk3MDliZDZlY/8RXHc=: 00:21:28.087 05:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.087 05:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.087 05:10:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.087 05:10:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.087 05:10:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.087 05:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.087 05:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:28.087 05:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:28.344 05:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:21:28.344 05:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.344 05:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:28.344 05:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:28.344 05:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:28.344 05:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.344 05:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.344 05:10:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.344 05:10:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.344 05:10:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.344 05:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.344 05:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
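After each attach, the script verifies that authentication actually completed by querying both ends: the host RPC must list the controller, and the target's qpair must carry the negotiated digest, DH group, and a completed auth state. A sketch of those assertions as target/auth.sh@44-49 performs them, here for the sha384/ffdhe8192 pass (socket paths as in this run):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Host reports the attached controller by name
    [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # Target's qpair records what was actually negotiated during the handshake
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    # Detach before the next digest/dhgroup/key combination
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0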
00:21:29.276 00:21:29.276 05:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.276 05:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.276 05:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.276 05:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.276 05:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.276 05:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.276 05:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.276 05:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.276 05:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.276 { 00:21:29.276 "cntlid": 91, 00:21:29.276 "qid": 0, 00:21:29.276 "state": "enabled", 00:21:29.276 "thread": "nvmf_tgt_poll_group_000", 00:21:29.276 "listen_address": { 00:21:29.276 "trtype": "TCP", 00:21:29.276 "adrfam": "IPv4", 00:21:29.276 "traddr": "10.0.0.2", 00:21:29.276 "trsvcid": "4420" 00:21:29.276 }, 00:21:29.276 "peer_address": { 00:21:29.276 "trtype": "TCP", 00:21:29.276 "adrfam": "IPv4", 00:21:29.276 "traddr": "10.0.0.1", 00:21:29.276 "trsvcid": "32900" 00:21:29.276 }, 00:21:29.276 "auth": { 00:21:29.276 "state": "completed", 00:21:29.276 "digest": "sha384", 00:21:29.276 "dhgroup": "ffdhe8192" 00:21:29.276 } 00:21:29.276 } 00:21:29.276 ]' 00:21:29.276 05:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.553 05:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:29.553 05:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.553 05:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:29.553 05:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.553 05:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.553 05:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.553 05:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.810 05:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDg5OGZiZjU2YjBmMWU0NzYyOGJkZDg2MGIyODI3YWYhEdFv: --dhchap-ctrl-secret DHHC-1:02:MjY4MzU1YTJiZTE3YmU2N2VkYjMwMmRkYzNlYjk0OTNhNTM3ZWU5YjgxYTU3MmUxSzAYhg==: 00:21:30.741 05:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.741 05:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.741 05:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:30.741 05:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.741 05:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.741 05:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.741 05:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:30.741 05:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:31.003 05:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:21:31.003 05:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:31.003 05:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:31.003 05:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:31.003 05:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:31.003 05:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.003 05:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.003 05:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.003 05:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.004 05:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.004 05:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.004 05:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.933 00:21:31.933 05:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.933 05:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.933 05:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.190 05:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.190 05:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.190 05:10:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.190 05:10:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.190 05:10:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.190 05:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:32.190 { 
00:21:32.190 "cntlid": 93, 00:21:32.190 "qid": 0, 00:21:32.190 "state": "enabled", 00:21:32.190 "thread": "nvmf_tgt_poll_group_000", 00:21:32.190 "listen_address": { 00:21:32.190 "trtype": "TCP", 00:21:32.190 "adrfam": "IPv4", 00:21:32.190 "traddr": "10.0.0.2", 00:21:32.190 "trsvcid": "4420" 00:21:32.190 }, 00:21:32.190 "peer_address": { 00:21:32.190 "trtype": "TCP", 00:21:32.190 "adrfam": "IPv4", 00:21:32.190 "traddr": "10.0.0.1", 00:21:32.190 "trsvcid": "32934" 00:21:32.190 }, 00:21:32.190 "auth": { 00:21:32.190 "state": "completed", 00:21:32.190 "digest": "sha384", 00:21:32.190 "dhgroup": "ffdhe8192" 00:21:32.190 } 00:21:32.190 } 00:21:32.190 ]' 00:21:32.190 05:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:32.190 05:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:32.190 05:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:32.190 05:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:32.190 05:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:32.190 05:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.190 05:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.190 05:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.447 05:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWFkMTJmZTgxMGU5NTY3YWQ4ZTg3Yjc5OWMyMmI1MjA2YTRmYWM1NGE2ODk4MDJheb3/8w==: --dhchap-ctrl-secret DHHC-1:01:NWY0ODgyMWZjMWE2NWNmZmQ2ZDgyNjFlMzg0YTU2ZDgcsnP3: 00:21:33.378 05:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.378 05:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.378 05:10:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.378 05:10:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.378 05:10:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.378 05:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.378 05:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:33.378 05:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:33.635 05:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:21:33.635 05:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.635 05:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:33.635 05:10:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:33.635 05:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:33.635 05:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.635 05:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:33.635 05:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.635 05:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.635 05:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.635 05:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:33.635 05:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.568 00:21:34.568 05:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:34.568 05:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.568 05:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:34.826 05:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.826 05:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.826 05:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.826 05:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.826 05:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.826 05:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.826 { 00:21:34.826 "cntlid": 95, 00:21:34.826 "qid": 0, 00:21:34.826 "state": "enabled", 00:21:34.826 "thread": "nvmf_tgt_poll_group_000", 00:21:34.826 "listen_address": { 00:21:34.826 "trtype": "TCP", 00:21:34.826 "adrfam": "IPv4", 00:21:34.826 "traddr": "10.0.0.2", 00:21:34.826 "trsvcid": "4420" 00:21:34.826 }, 00:21:34.826 "peer_address": { 00:21:34.826 "trtype": "TCP", 00:21:34.826 "adrfam": "IPv4", 00:21:34.826 "traddr": "10.0.0.1", 00:21:34.826 "trsvcid": "55152" 00:21:34.826 }, 00:21:34.826 "auth": { 00:21:34.826 "state": "completed", 00:21:34.826 "digest": "sha384", 00:21:34.826 "dhgroup": "ffdhe8192" 00:21:34.826 } 00:21:34.826 } 00:21:34.826 ]' 00:21:34.826 05:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:35.084 05:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:35.084 05:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:35.084 05:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:35.084 05:10:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:35.084 05:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.084 05:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.084 05:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.342 05:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MWJmZjFjOGFiZGM2YjU5MzU1ZWJiNmYxNWQ0ZTBiNmZkODFlNDNkOTA0YmI5MjBhMGRjMTRhODNlZjEzNWQxYyLyhT0=: 00:21:36.274 05:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.274 05:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.274 05:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.274 05:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.274 05:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.274 05:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:36.274 05:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:36.274 05:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:36.274 05:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:36.274 05:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:36.531 05:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:36.531 05:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:36.531 05:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:36.531 05:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:36.531 05:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:36.531 05:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.531 05:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.531 05:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.531 05:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.531 05:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.531 05:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.531 05:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.788 00:21:36.788 05:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:36.788 05:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:36.788 05:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.045 05:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.045 05:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.045 05:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.045 05:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.045 05:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.045 05:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:37.045 { 00:21:37.045 "cntlid": 97, 00:21:37.045 "qid": 0, 00:21:37.045 "state": "enabled", 00:21:37.045 "thread": "nvmf_tgt_poll_group_000", 00:21:37.045 "listen_address": { 00:21:37.045 "trtype": "TCP", 00:21:37.045 "adrfam": "IPv4", 00:21:37.045 "traddr": "10.0.0.2", 00:21:37.045 "trsvcid": "4420" 00:21:37.045 }, 00:21:37.045 "peer_address": { 00:21:37.045 "trtype": "TCP", 00:21:37.045 "adrfam": "IPv4", 00:21:37.045 "traddr": "10.0.0.1", 00:21:37.045 "trsvcid": "55186" 00:21:37.045 }, 00:21:37.045 "auth": { 00:21:37.045 "state": "completed", 00:21:37.045 "digest": "sha512", 00:21:37.045 "dhgroup": "null" 00:21:37.045 } 00:21:37.045 } 00:21:37.045 ]' 00:21:37.045 05:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:37.045 05:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.045 05:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:37.303 05:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:37.303 05:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:37.303 05:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.303 05:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.303 05:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.561 05:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDBiYTY2ZDU5NGQ0OWZiY2M0NjEyNTU1OWU1MjVmNDg2Y2I1MzYzZjU1N2ZkMGZicfhNbA==: --dhchap-ctrl-secret 
DHHC-1:03:N2U3ZmUxOGFlZmMyNTBkNGFiNjViMWFiM2JkN2M5NDU3MDNiMzFkNDA4YTMwNzM4MDVkMjg4Njk3MDliZDZlY/8RXHc=: 00:21:38.495 05:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.495 05:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:38.495 05:10:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.495 05:10:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.495 05:10:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.495 05:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:38.495 05:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:38.495 05:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:38.753 05:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:38.753 05:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:38.753 05:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:38.753 05:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:38.753 05:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:38.753 05:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.754 05:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.754 05:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.754 05:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.754 05:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.754 05:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.754 05:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.012 00:21:39.012 05:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:39.012 05:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:39.012 05:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.270 05:10:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.271 05:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.271 05:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.271 05:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.271 05:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.271 05:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:39.271 { 00:21:39.271 "cntlid": 99, 00:21:39.271 "qid": 0, 00:21:39.271 "state": "enabled", 00:21:39.271 "thread": "nvmf_tgt_poll_group_000", 00:21:39.271 "listen_address": { 00:21:39.271 "trtype": "TCP", 00:21:39.271 "adrfam": "IPv4", 00:21:39.271 "traddr": "10.0.0.2", 00:21:39.271 "trsvcid": "4420" 00:21:39.271 }, 00:21:39.271 "peer_address": { 00:21:39.271 "trtype": "TCP", 00:21:39.271 "adrfam": "IPv4", 00:21:39.271 "traddr": "10.0.0.1", 00:21:39.271 "trsvcid": "55220" 00:21:39.271 }, 00:21:39.271 "auth": { 00:21:39.271 "state": "completed", 00:21:39.271 "digest": "sha512", 00:21:39.271 "dhgroup": "null" 00:21:39.271 } 00:21:39.271 } 00:21:39.271 ]' 00:21:39.271 05:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:39.529 05:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.529 05:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:39.529 05:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:39.529 05:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:39.529 05:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.529 05:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.529 05:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.788 05:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDg5OGZiZjU2YjBmMWU0NzYyOGJkZDg2MGIyODI3YWYhEdFv: --dhchap-ctrl-secret DHHC-1:02:MjY4MzU1YTJiZTE3YmU2N2VkYjMwMmRkYzNlYjk0OTNhNTM3ZWU5YjgxYTU3MmUxSzAYhg==: 00:21:40.720 05:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.720 05:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.720 05:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.720 05:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.720 05:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.720 05:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:40.720 05:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:40.720 05:10:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:40.978 05:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:40.978 05:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:40.978 05:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:40.978 05:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:40.978 05:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:40.978 05:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.978 05:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.978 05:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.978 05:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.978 05:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.978 05:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.978 05:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.236 00:21:41.236 05:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:41.236 05:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:41.236 05:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.494 05:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.494 05:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.494 05:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.494 05:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.494 05:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.494 05:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:41.494 { 00:21:41.494 "cntlid": 101, 00:21:41.494 "qid": 0, 00:21:41.494 "state": "enabled", 00:21:41.494 "thread": "nvmf_tgt_poll_group_000", 00:21:41.494 "listen_address": { 00:21:41.494 "trtype": "TCP", 00:21:41.494 "adrfam": "IPv4", 00:21:41.494 "traddr": "10.0.0.2", 00:21:41.494 "trsvcid": "4420" 00:21:41.494 }, 00:21:41.494 "peer_address": { 00:21:41.494 "trtype": "TCP", 00:21:41.494 "adrfam": "IPv4", 00:21:41.494 "traddr": "10.0.0.1", 00:21:41.494 "trsvcid": "55242" 00:21:41.494 }, 00:21:41.494 "auth": 
{ 00:21:41.494 "state": "completed", 00:21:41.494 "digest": "sha512", 00:21:41.494 "dhgroup": "null" 00:21:41.494 } 00:21:41.494 } 00:21:41.494 ]' 00:21:41.494 05:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:41.494 05:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.494 05:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:41.494 05:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:41.494 05:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:41.753 05:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.753 05:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.753 05:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.753 05:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWFkMTJmZTgxMGU5NTY3YWQ4ZTg3Yjc5OWMyMmI1MjA2YTRmYWM1NGE2ODk4MDJheb3/8w==: --dhchap-ctrl-secret DHHC-1:01:NWY0ODgyMWZjMWE2NWNmZmQ2ZDgyNjFlMzg0YTU2ZDgcsnP3: 00:21:42.711 05:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.711 05:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.711 05:10:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.711 05:10:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.711 05:10:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.711 05:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:42.711 05:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:42.711 05:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:42.969 05:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:42.969 05:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:42.969 05:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:42.969 05:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:42.969 05:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:42.969 05:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.969 05:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:42.969 05:10:49 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.969 05:10:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.228 05:10:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.228 05:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:43.228 05:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:43.528 00:21:43.528 05:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:43.528 05:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:43.528 05:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.786 05:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.786 05:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.786 05:10:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.786 05:10:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.786 05:10:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.786 05:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:43.786 { 00:21:43.786 "cntlid": 103, 00:21:43.786 "qid": 0, 00:21:43.786 "state": "enabled", 00:21:43.786 "thread": "nvmf_tgt_poll_group_000", 00:21:43.786 "listen_address": { 00:21:43.786 "trtype": "TCP", 00:21:43.786 "adrfam": "IPv4", 00:21:43.786 "traddr": "10.0.0.2", 00:21:43.786 "trsvcid": "4420" 00:21:43.786 }, 00:21:43.786 "peer_address": { 00:21:43.786 "trtype": "TCP", 00:21:43.786 "adrfam": "IPv4", 00:21:43.786 "traddr": "10.0.0.1", 00:21:43.786 "trsvcid": "55668" 00:21:43.787 }, 00:21:43.787 "auth": { 00:21:43.787 "state": "completed", 00:21:43.787 "digest": "sha512", 00:21:43.787 "dhgroup": "null" 00:21:43.787 } 00:21:43.787 } 00:21:43.787 ]' 00:21:43.787 05:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:43.787 05:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.787 05:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:43.787 05:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:43.787 05:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:43.787 05:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.787 05:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.787 05:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.044 05:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MWJmZjFjOGFiZGM2YjU5MzU1ZWJiNmYxNWQ0ZTBiNmZkODFlNDNkOTA0YmI5MjBhMGRjMTRhODNlZjEzNWQxYyLyhT0=: 00:21:44.978 05:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.978 05:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.978 05:10:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.978 05:10:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.978 05:10:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.978 05:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:44.978 05:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:44.978 05:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:44.978 05:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:45.236 05:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:45.236 05:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:45.236 05:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:45.236 05:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:45.236 05:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:45.236 05:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.236 05:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.236 05:10:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.236 05:10:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.236 05:10:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.236 05:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.236 05:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.800 00:21:45.800 05:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:45.800 05:10:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:45.800 05:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.058 05:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.058 05:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.058 05:10:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.058 05:10:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.058 05:10:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.058 05:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:46.058 { 00:21:46.058 "cntlid": 105, 00:21:46.058 "qid": 0, 00:21:46.058 "state": "enabled", 00:21:46.058 "thread": "nvmf_tgt_poll_group_000", 00:21:46.058 "listen_address": { 00:21:46.058 "trtype": "TCP", 00:21:46.058 "adrfam": "IPv4", 00:21:46.058 "traddr": "10.0.0.2", 00:21:46.058 "trsvcid": "4420" 00:21:46.058 }, 00:21:46.058 "peer_address": { 00:21:46.058 "trtype": "TCP", 00:21:46.058 "adrfam": "IPv4", 00:21:46.058 "traddr": "10.0.0.1", 00:21:46.058 "trsvcid": "55690" 00:21:46.058 }, 00:21:46.058 "auth": { 00:21:46.058 "state": "completed", 00:21:46.058 "digest": "sha512", 00:21:46.058 "dhgroup": "ffdhe2048" 00:21:46.058 } 00:21:46.058 } 00:21:46.058 ]' 00:21:46.058 05:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:46.058 05:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.058 05:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:46.058 05:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:46.058 05:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:46.058 05:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.058 05:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.058 05:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.315 05:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDBiYTY2ZDU5NGQ0OWZiY2M0NjEyNTU1OWU1MjVmNDg2Y2I1MzYzZjU1N2ZkMGZicfhNbA==: --dhchap-ctrl-secret DHHC-1:03:N2U3ZmUxOGFlZmMyNTBkNGFiNjViMWFiM2JkN2M5NDU3MDNiMzFkNDA4YTMwNzM4MDVkMjg4Njk3MDliZDZlY/8RXHc=: 00:21:47.250 05:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.250 05:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.250 05:10:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.250 05:10:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
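[annotation] The trace above replays the same connect_authenticate round trip for each digest/dhgroup/key combination. The following is a minimal standalone sketch of one such iteration, reconstructed from the commands captured in this log; RPC_PY, HOST_SOCK, and HOSTID are conveniences introduced here, the target-side calls are assumed to use the default SPDK RPC socket, and the DHHC-1 secrets are placeholders rather than the keys recorded above.

  #!/usr/bin/env bash
  # Sketch of one target/auth.sh connect_authenticate iteration (sha512/ffdhe2048/key1),
  # assuming an SPDK target and a host-side bdev app are already running.
  set -e

  RPC_PY=scripts/rpc.py                 # SPDK RPC client (spdk/scripts/rpc.py)
  HOST_SOCK=/var/tmp/host.sock          # host-side RPC socket used in the log
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55

  # 1. Restrict the host to a single digest/dhgroup combination for this pass.
  $RPC_PY -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

  # 2. Allow the host on the subsystem with a DH-HMAC-CHAP key pair (target side).
  $RPC_PY nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # 3. Attach a controller from the host side, authenticating with the same keys.
  $RPC_PY -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # 4. Verify the controller came up and the qpair finished authentication,
  #    using the same jq filters the test applies to the qpairs JSON above.
  [[ $($RPC_PY -s $HOST_SOCK bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  qpairs=$($RPC_PY nvmf_subsystem_get_qpairs $SUBNQN)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # 5. Detach, then repeat the handshake with the kernel initiator (nvme-cli);
  #    the DHHC-1 secrets are elided here rather than copied from the log.
  $RPC_PY -s $HOST_SOCK bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN --hostid $HOSTID \
      --dhchap-secret "DHHC-1:01:<key>:" --dhchap-ctrl-secret "DHHC-1:02:<ctrl-key>:"
  nvme disconnect -n $SUBNQN

  # 6. Remove the host entry before the next digest/dhgroup/key combination.
  $RPC_PY nvmf_subsystem_remove_host $SUBNQN $HOSTNQN

With set -e, each [[ ... ]] check doubles as an assertion, mirroring how the test's shell-trace comparisons (e.g. [[ sha512 == \s\h\a\5\1\2 ]]) fail the run on any mismatch.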
00:21:47.250 05:10:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.250 05:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:47.250 05:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:47.250 05:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:47.815 05:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:47.815 05:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:47.815 05:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:47.815 05:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:47.815 05:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:47.815 05:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.815 05:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.815 05:10:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.815 05:10:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.815 05:10:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.815 05:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.815 05:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.073 00:21:48.073 05:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:48.073 05:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:48.073 05:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.331 05:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.331 05:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.331 05:10:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.331 05:10:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.331 05:10:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.331 05:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:48.331 { 00:21:48.331 "cntlid": 107, 00:21:48.331 "qid": 0, 00:21:48.331 "state": "enabled", 00:21:48.331 "thread": 
"nvmf_tgt_poll_group_000", 00:21:48.331 "listen_address": { 00:21:48.331 "trtype": "TCP", 00:21:48.331 "adrfam": "IPv4", 00:21:48.331 "traddr": "10.0.0.2", 00:21:48.331 "trsvcid": "4420" 00:21:48.331 }, 00:21:48.331 "peer_address": { 00:21:48.331 "trtype": "TCP", 00:21:48.331 "adrfam": "IPv4", 00:21:48.331 "traddr": "10.0.0.1", 00:21:48.331 "trsvcid": "55720" 00:21:48.331 }, 00:21:48.331 "auth": { 00:21:48.331 "state": "completed", 00:21:48.331 "digest": "sha512", 00:21:48.332 "dhgroup": "ffdhe2048" 00:21:48.332 } 00:21:48.332 } 00:21:48.332 ]' 00:21:48.332 05:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:48.332 05:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.332 05:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:48.332 05:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:48.332 05:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:48.332 05:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.332 05:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.332 05:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.590 05:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDg5OGZiZjU2YjBmMWU0NzYyOGJkZDg2MGIyODI3YWYhEdFv: --dhchap-ctrl-secret DHHC-1:02:MjY4MzU1YTJiZTE3YmU2N2VkYjMwMmRkYzNlYjk0OTNhNTM3ZWU5YjgxYTU3MmUxSzAYhg==: 00:21:49.523 05:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.523 05:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.523 05:10:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.523 05:10:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.523 05:10:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.523 05:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:49.523 05:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:49.523 05:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:49.781 05:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:49.781 05:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:49.781 05:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:49.781 05:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:49.781 05:10:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:49.781 05:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.781 05:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.781 05:10:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.781 05:10:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.781 05:10:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.781 05:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.781 05:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.346 00:21:50.346 05:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:50.346 05:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.346 05:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:50.604 05:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.604 05:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.604 05:10:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.604 05:10:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.604 05:10:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.604 05:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:50.604 { 00:21:50.604 "cntlid": 109, 00:21:50.604 "qid": 0, 00:21:50.604 "state": "enabled", 00:21:50.604 "thread": "nvmf_tgt_poll_group_000", 00:21:50.604 "listen_address": { 00:21:50.604 "trtype": "TCP", 00:21:50.604 "adrfam": "IPv4", 00:21:50.604 "traddr": "10.0.0.2", 00:21:50.604 "trsvcid": "4420" 00:21:50.604 }, 00:21:50.604 "peer_address": { 00:21:50.604 "trtype": "TCP", 00:21:50.604 "adrfam": "IPv4", 00:21:50.604 "traddr": "10.0.0.1", 00:21:50.604 "trsvcid": "55748" 00:21:50.604 }, 00:21:50.604 "auth": { 00:21:50.604 "state": "completed", 00:21:50.604 "digest": "sha512", 00:21:50.604 "dhgroup": "ffdhe2048" 00:21:50.604 } 00:21:50.604 } 00:21:50.604 ]' 00:21:50.604 05:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:50.604 05:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.604 05:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:50.604 05:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:50.604 05:10:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:50.604 05:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.604 05:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.604 05:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.862 05:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWFkMTJmZTgxMGU5NTY3YWQ4ZTg3Yjc5OWMyMmI1MjA2YTRmYWM1NGE2ODk4MDJheb3/8w==: --dhchap-ctrl-secret DHHC-1:01:NWY0ODgyMWZjMWE2NWNmZmQ2ZDgyNjFlMzg0YTU2ZDgcsnP3: 00:21:51.794 05:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.794 05:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.794 05:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.794 05:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.794 05:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.794 05:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:51.794 05:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:51.794 05:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:52.052 05:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:52.052 05:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:52.052 05:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:52.052 05:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:52.052 05:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:52.052 05:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.052 05:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:52.052 05:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.052 05:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.052 05:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.052 05:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:52.052 05:10:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:52.310 00:21:52.310 05:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:52.310 05:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:52.310 05:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.569 05:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.569 05:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.569 05:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.569 05:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.569 05:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.569 05:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:52.569 { 00:21:52.569 "cntlid": 111, 00:21:52.569 "qid": 0, 00:21:52.569 "state": "enabled", 00:21:52.569 "thread": "nvmf_tgt_poll_group_000", 00:21:52.569 "listen_address": { 00:21:52.569 "trtype": "TCP", 00:21:52.569 "adrfam": "IPv4", 00:21:52.569 "traddr": "10.0.0.2", 00:21:52.569 "trsvcid": "4420" 00:21:52.569 }, 00:21:52.569 "peer_address": { 00:21:52.569 "trtype": "TCP", 00:21:52.569 "adrfam": "IPv4", 00:21:52.569 "traddr": "10.0.0.1", 00:21:52.569 "trsvcid": "37902" 00:21:52.569 }, 00:21:52.569 "auth": { 00:21:52.569 "state": "completed", 00:21:52.569 "digest": "sha512", 00:21:52.569 "dhgroup": "ffdhe2048" 00:21:52.569 } 00:21:52.569 } 00:21:52.569 ]' 00:21:52.569 05:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:52.569 05:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.569 05:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:52.569 05:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:52.569 05:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:52.827 05:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.827 05:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.827 05:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.085 05:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MWJmZjFjOGFiZGM2YjU5MzU1ZWJiNmYxNWQ0ZTBiNmZkODFlNDNkOTA0YmI5MjBhMGRjMTRhODNlZjEzNWQxYyLyhT0=: 00:21:54.017 05:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.017 05:11:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.017 05:11:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.017 05:11:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.017 05:11:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.017 05:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:54.017 05:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:54.017 05:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:54.017 05:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:54.274 05:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:54.274 05:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:54.274 05:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:54.274 05:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:54.274 05:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:54.274 05:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.274 05:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.274 05:11:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.274 05:11:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.274 05:11:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.274 05:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.274 05:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.533 00:21:54.533 05:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:54.533 05:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:54.533 05:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.790 05:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.790 05:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.790 05:11:01 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.790 05:11:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.790 05:11:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.790 05:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:54.790 { 00:21:54.790 "cntlid": 113, 00:21:54.790 "qid": 0, 00:21:54.790 "state": "enabled", 00:21:54.790 "thread": "nvmf_tgt_poll_group_000", 00:21:54.790 "listen_address": { 00:21:54.790 "trtype": "TCP", 00:21:54.790 "adrfam": "IPv4", 00:21:54.790 "traddr": "10.0.0.2", 00:21:54.790 "trsvcid": "4420" 00:21:54.790 }, 00:21:54.790 "peer_address": { 00:21:54.790 "trtype": "TCP", 00:21:54.790 "adrfam": "IPv4", 00:21:54.790 "traddr": "10.0.0.1", 00:21:54.790 "trsvcid": "37930" 00:21:54.790 }, 00:21:54.790 "auth": { 00:21:54.790 "state": "completed", 00:21:54.790 "digest": "sha512", 00:21:54.790 "dhgroup": "ffdhe3072" 00:21:54.790 } 00:21:54.790 } 00:21:54.790 ]' 00:21:54.790 05:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:54.790 05:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.790 05:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:54.790 05:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:54.790 05:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:54.790 05:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.790 05:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.790 05:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.355 05:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDBiYTY2ZDU5NGQ0OWZiY2M0NjEyNTU1OWU1MjVmNDg2Y2I1MzYzZjU1N2ZkMGZicfhNbA==: --dhchap-ctrl-secret DHHC-1:03:N2U3ZmUxOGFlZmMyNTBkNGFiNjViMWFiM2JkN2M5NDU3MDNiMzFkNDA4YTMwNzM4MDVkMjg4Njk3MDliZDZlY/8RXHc=: 00:21:56.289 05:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.289 05:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.289 05:11:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.289 05:11:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.289 05:11:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.289 05:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:56.289 05:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:56.289 05:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:56.545 05:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:56.545 05:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:56.545 05:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:56.545 05:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:56.545 05:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:56.545 05:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.545 05:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.545 05:11:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.545 05:11:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.545 05:11:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.545 05:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.545 05:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.802 00:21:56.802 05:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:56.802 05:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:56.802 05:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.060 05:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.060 05:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.060 05:11:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.060 05:11:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.060 05:11:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.060 05:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:57.060 { 00:21:57.060 "cntlid": 115, 00:21:57.060 "qid": 0, 00:21:57.060 "state": "enabled", 00:21:57.060 "thread": "nvmf_tgt_poll_group_000", 00:21:57.060 "listen_address": { 00:21:57.060 "trtype": "TCP", 00:21:57.060 "adrfam": "IPv4", 00:21:57.060 "traddr": "10.0.0.2", 00:21:57.060 "trsvcid": "4420" 00:21:57.060 }, 00:21:57.060 "peer_address": { 00:21:57.060 "trtype": "TCP", 00:21:57.060 "adrfam": "IPv4", 00:21:57.060 "traddr": "10.0.0.1", 00:21:57.060 "trsvcid": "37954" 00:21:57.060 }, 00:21:57.060 "auth": { 00:21:57.060 "state": "completed", 00:21:57.060 "digest": "sha512", 00:21:57.060 "dhgroup": "ffdhe3072" 00:21:57.060 } 00:21:57.060 } 
00:21:57.060 ]' 00:21:57.060 05:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:57.060 05:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.060 05:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:57.060 05:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:57.060 05:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:57.060 05:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.060 05:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.060 05:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.318 05:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDg5OGZiZjU2YjBmMWU0NzYyOGJkZDg2MGIyODI3YWYhEdFv: --dhchap-ctrl-secret DHHC-1:02:MjY4MzU1YTJiZTE3YmU2N2VkYjMwMmRkYzNlYjk0OTNhNTM3ZWU5YjgxYTU3MmUxSzAYhg==: 00:21:58.690 05:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.690 05:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:58.690 05:11:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.690 05:11:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.690 05:11:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.690 05:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:58.690 05:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:58.690 05:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:58.690 05:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:58.690 05:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:58.690 05:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:58.690 05:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:58.690 05:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:58.690 05:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.690 05:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.690 05:11:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.690 05:11:05 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.690 05:11:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.690 05:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.690 05:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.948 00:21:58.948 05:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:58.948 05:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.948 05:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:59.206 05:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.206 05:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.206 05:11:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.206 05:11:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.206 05:11:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.206 05:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:59.206 { 00:21:59.206 "cntlid": 117, 00:21:59.206 "qid": 0, 00:21:59.206 "state": "enabled", 00:21:59.206 "thread": "nvmf_tgt_poll_group_000", 00:21:59.206 "listen_address": { 00:21:59.206 "trtype": "TCP", 00:21:59.206 "adrfam": "IPv4", 00:21:59.206 "traddr": "10.0.0.2", 00:21:59.206 "trsvcid": "4420" 00:21:59.206 }, 00:21:59.206 "peer_address": { 00:21:59.206 "trtype": "TCP", 00:21:59.206 "adrfam": "IPv4", 00:21:59.206 "traddr": "10.0.0.1", 00:21:59.206 "trsvcid": "37996" 00:21:59.206 }, 00:21:59.206 "auth": { 00:21:59.206 "state": "completed", 00:21:59.206 "digest": "sha512", 00:21:59.206 "dhgroup": "ffdhe3072" 00:21:59.206 } 00:21:59.206 } 00:21:59.206 ]' 00:21:59.206 05:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:59.206 05:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.206 05:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:59.464 05:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:59.464 05:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:59.464 05:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.464 05:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.464 05:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.722 05:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWFkMTJmZTgxMGU5NTY3YWQ4ZTg3Yjc5OWMyMmI1MjA2YTRmYWM1NGE2ODk4MDJheb3/8w==: --dhchap-ctrl-secret DHHC-1:01:NWY0ODgyMWZjMWE2NWNmZmQ2ZDgyNjFlMzg0YTU2ZDgcsnP3: 00:22:00.656 05:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.656 05:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:00.656 05:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.656 05:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.656 05:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.656 05:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:00.656 05:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:00.656 05:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:00.914 05:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:22:00.915 05:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:00.915 05:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:00.915 05:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:00.915 05:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:00.915 05:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.915 05:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:00.915 05:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.915 05:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.915 05:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.915 05:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:00.915 05:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:01.173 00:22:01.173 05:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:01.173 05:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:01.173 05:11:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.431 05:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.431 05:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.432 05:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.432 05:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.432 05:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.432 05:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:01.432 { 00:22:01.432 "cntlid": 119, 00:22:01.432 "qid": 0, 00:22:01.432 "state": "enabled", 00:22:01.432 "thread": "nvmf_tgt_poll_group_000", 00:22:01.432 "listen_address": { 00:22:01.432 "trtype": "TCP", 00:22:01.432 "adrfam": "IPv4", 00:22:01.432 "traddr": "10.0.0.2", 00:22:01.432 "trsvcid": "4420" 00:22:01.432 }, 00:22:01.432 "peer_address": { 00:22:01.432 "trtype": "TCP", 00:22:01.432 "adrfam": "IPv4", 00:22:01.432 "traddr": "10.0.0.1", 00:22:01.432 "trsvcid": "38020" 00:22:01.432 }, 00:22:01.432 "auth": { 00:22:01.432 "state": "completed", 00:22:01.432 "digest": "sha512", 00:22:01.432 "dhgroup": "ffdhe3072" 00:22:01.432 } 00:22:01.432 } 00:22:01.432 ]' 00:22:01.432 05:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:01.432 05:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.432 05:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:01.432 05:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:01.432 05:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:01.690 05:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.690 05:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.690 05:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.948 05:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MWJmZjFjOGFiZGM2YjU5MzU1ZWJiNmYxNWQ0ZTBiNmZkODFlNDNkOTA0YmI5MjBhMGRjMTRhODNlZjEzNWQxYyLyhT0=: 00:22:02.883 05:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.883 05:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.883 05:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.883 05:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.883 05:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.883 05:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:02.883 05:11:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:02.883 05:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:02.883 05:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:03.141 05:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:22:03.141 05:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:03.141 05:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:03.141 05:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:03.141 05:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:03.141 05:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.141 05:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.141 05:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.141 05:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.141 05:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.141 05:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.141 05:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.399 00:22:03.399 05:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:03.399 05:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:03.399 05:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.657 05:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.658 05:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.658 05:11:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.658 05:11:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.658 05:11:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.658 05:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:03.658 { 00:22:03.658 "cntlid": 121, 00:22:03.658 "qid": 0, 00:22:03.658 "state": "enabled", 00:22:03.658 "thread": "nvmf_tgt_poll_group_000", 00:22:03.658 "listen_address": { 00:22:03.658 "trtype": "TCP", 00:22:03.658 "adrfam": "IPv4", 
00:22:03.658 "traddr": "10.0.0.2", 00:22:03.658 "trsvcid": "4420" 00:22:03.658 }, 00:22:03.658 "peer_address": { 00:22:03.658 "trtype": "TCP", 00:22:03.658 "adrfam": "IPv4", 00:22:03.658 "traddr": "10.0.0.1", 00:22:03.658 "trsvcid": "38180" 00:22:03.658 }, 00:22:03.658 "auth": { 00:22:03.658 "state": "completed", 00:22:03.658 "digest": "sha512", 00:22:03.658 "dhgroup": "ffdhe4096" 00:22:03.658 } 00:22:03.658 } 00:22:03.658 ]' 00:22:03.658 05:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:03.916 05:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.916 05:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:03.916 05:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:03.916 05:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:03.916 05:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.916 05:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.916 05:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.174 05:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDBiYTY2ZDU5NGQ0OWZiY2M0NjEyNTU1OWU1MjVmNDg2Y2I1MzYzZjU1N2ZkMGZicfhNbA==: --dhchap-ctrl-secret DHHC-1:03:N2U3ZmUxOGFlZmMyNTBkNGFiNjViMWFiM2JkN2M5NDU3MDNiMzFkNDA4YTMwNzM4MDVkMjg4Njk3MDliZDZlY/8RXHc=: 00:22:05.106 05:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.107 05:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.107 05:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.107 05:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.107 05:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.107 05:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:05.107 05:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:05.107 05:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:05.364 05:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:22:05.364 05:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:05.364 05:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:05.364 05:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:05.364 05:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:05.364 05:11:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.364 05:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.364 05:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.364 05:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.364 05:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.364 05:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.364 05:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.930 00:22:05.930 05:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:05.930 05:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:05.930 05:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.930 05:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.930 05:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.930 05:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.930 05:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.930 05:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.930 05:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:05.930 { 00:22:05.930 "cntlid": 123, 00:22:05.930 "qid": 0, 00:22:05.930 "state": "enabled", 00:22:05.930 "thread": "nvmf_tgt_poll_group_000", 00:22:05.930 "listen_address": { 00:22:05.930 "trtype": "TCP", 00:22:05.930 "adrfam": "IPv4", 00:22:05.930 "traddr": "10.0.0.2", 00:22:05.930 "trsvcid": "4420" 00:22:05.930 }, 00:22:05.930 "peer_address": { 00:22:05.930 "trtype": "TCP", 00:22:05.931 "adrfam": "IPv4", 00:22:05.931 "traddr": "10.0.0.1", 00:22:05.931 "trsvcid": "38208" 00:22:05.931 }, 00:22:05.931 "auth": { 00:22:05.931 "state": "completed", 00:22:05.931 "digest": "sha512", 00:22:05.931 "dhgroup": "ffdhe4096" 00:22:05.931 } 00:22:05.931 } 00:22:05.931 ]' 00:22:05.931 05:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:06.192 05:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.192 05:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:06.192 05:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:06.192 05:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:06.192 05:11:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.192 05:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.192 05:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.450 05:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDg5OGZiZjU2YjBmMWU0NzYyOGJkZDg2MGIyODI3YWYhEdFv: --dhchap-ctrl-secret DHHC-1:02:MjY4MzU1YTJiZTE3YmU2N2VkYjMwMmRkYzNlYjk0OTNhNTM3ZWU5YjgxYTU3MmUxSzAYhg==: 00:22:07.379 05:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.379 05:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.379 05:11:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.379 05:11:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.379 05:11:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.379 05:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:07.379 05:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:07.379 05:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:07.637 05:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:22:07.637 05:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:07.637 05:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:07.637 05:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:07.637 05:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:07.637 05:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.637 05:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.637 05:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.637 05:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.637 05:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.637 05:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.637 05:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.201 00:22:08.201 05:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:08.201 05:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.201 05:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:08.458 05:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.458 05:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.458 05:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.458 05:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.458 05:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.458 05:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:08.458 { 00:22:08.458 "cntlid": 125, 00:22:08.458 "qid": 0, 00:22:08.458 "state": "enabled", 00:22:08.459 "thread": "nvmf_tgt_poll_group_000", 00:22:08.459 "listen_address": { 00:22:08.459 "trtype": "TCP", 00:22:08.459 "adrfam": "IPv4", 00:22:08.459 "traddr": "10.0.0.2", 00:22:08.459 "trsvcid": "4420" 00:22:08.459 }, 00:22:08.459 "peer_address": { 00:22:08.459 "trtype": "TCP", 00:22:08.459 "adrfam": "IPv4", 00:22:08.459 "traddr": "10.0.0.1", 00:22:08.459 "trsvcid": "38242" 00:22:08.459 }, 00:22:08.459 "auth": { 00:22:08.459 "state": "completed", 00:22:08.459 "digest": "sha512", 00:22:08.459 "dhgroup": "ffdhe4096" 00:22:08.459 } 00:22:08.459 } 00:22:08.459 ]' 00:22:08.459 05:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:08.459 05:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.459 05:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:08.459 05:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:08.459 05:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:08.459 05:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.459 05:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.459 05:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.736 05:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWFkMTJmZTgxMGU5NTY3YWQ4ZTg3Yjc5OWMyMmI1MjA2YTRmYWM1NGE2ODk4MDJheb3/8w==: --dhchap-ctrl-secret DHHC-1:01:NWY0ODgyMWZjMWE2NWNmZmQ2ZDgyNjFlMzg0YTU2ZDgcsnP3: 00:22:09.690 05:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
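
For reference, one full connect_authenticate pass, as traced above for sha512/ffdhe4096/key2, reduces to the command sequence below. This is a minimal sketch reconstructed from the xtrace: hostrpc expands to rpc.py against the host-side bdev socket (-s /var/tmp/host.sock), while rpc_cmd drives the nvmf target over its default RPC socket (the expansion is not shown in this trace, so the default socket is assumed); the DHHC-1 secrets are elided here but appear in full above.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  # host side: restrict DH-HMAC-CHAP negotiation to the digest/dhgroup under test
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

  # target side: register the host NQN with its key (plus controller key when bidirectional)
  $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # host side: attach, then verify the controller exists and the qpair authenticated
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  $RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  $RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth | .digest, .dhgroup, .state'   # sha512 / ffdhe4096 / completed

  # teardown: detach, prove the kernel initiator can also authenticate, remove the host
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOSTNQN" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret DHHC-1:02:... --dhchap-ctrl-secret DHHC-1:01:...
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  $RPC nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"

The trace then repeats this cycle for each key id (key0 through key3, key3 target-unidirectional) before advancing to the next DH group, as the ffdhe6144 and ffdhe8192 iterations below show.
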
00:22:09.690 05:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.690 05:11:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.690 05:11:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.690 05:11:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.690 05:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:09.690 05:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:09.690 05:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:09.948 05:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:22:09.948 05:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:09.948 05:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:09.948 05:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:09.948 05:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:09.948 05:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.948 05:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:09.948 05:11:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.948 05:11:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.948 05:11:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.948 05:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:09.948 05:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:10.514 00:22:10.514 05:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:10.514 05:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:10.514 05:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.772 05:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.772 05:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.772 05:11:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.772 05:11:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:22:10.772 05:11:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.772 05:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:10.772 { 00:22:10.772 "cntlid": 127, 00:22:10.772 "qid": 0, 00:22:10.772 "state": "enabled", 00:22:10.772 "thread": "nvmf_tgt_poll_group_000", 00:22:10.772 "listen_address": { 00:22:10.772 "trtype": "TCP", 00:22:10.772 "adrfam": "IPv4", 00:22:10.772 "traddr": "10.0.0.2", 00:22:10.772 "trsvcid": "4420" 00:22:10.772 }, 00:22:10.772 "peer_address": { 00:22:10.772 "trtype": "TCP", 00:22:10.772 "adrfam": "IPv4", 00:22:10.772 "traddr": "10.0.0.1", 00:22:10.772 "trsvcid": "38264" 00:22:10.772 }, 00:22:10.772 "auth": { 00:22:10.772 "state": "completed", 00:22:10.772 "digest": "sha512", 00:22:10.772 "dhgroup": "ffdhe4096" 00:22:10.772 } 00:22:10.772 } 00:22:10.772 ]' 00:22:10.772 05:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:10.773 05:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.773 05:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:10.773 05:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:10.773 05:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:10.773 05:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.773 05:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.773 05:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.030 05:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MWJmZjFjOGFiZGM2YjU5MzU1ZWJiNmYxNWQ0ZTBiNmZkODFlNDNkOTA0YmI5MjBhMGRjMTRhODNlZjEzNWQxYyLyhT0=: 00:22:11.965 05:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.965 05:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:11.965 05:11:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.965 05:11:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.965 05:11:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.965 05:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:11.965 05:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:11.965 05:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:11.965 05:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:12.223 05:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:22:12.223 05:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:12.223 05:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:12.223 05:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:12.223 05:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:12.223 05:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.223 05:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.223 05:11:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.223 05:11:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.223 05:11:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.223 05:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.223 05:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.788 00:22:12.788 05:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:12.788 05:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:12.788 05:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.045 05:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.045 05:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.045 05:11:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.045 05:11:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.045 05:11:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.045 05:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:13.045 { 00:22:13.045 "cntlid": 129, 00:22:13.045 "qid": 0, 00:22:13.045 "state": "enabled", 00:22:13.045 "thread": "nvmf_tgt_poll_group_000", 00:22:13.045 "listen_address": { 00:22:13.045 "trtype": "TCP", 00:22:13.045 "adrfam": "IPv4", 00:22:13.045 "traddr": "10.0.0.2", 00:22:13.045 "trsvcid": "4420" 00:22:13.045 }, 00:22:13.045 "peer_address": { 00:22:13.045 "trtype": "TCP", 00:22:13.045 "adrfam": "IPv4", 00:22:13.045 "traddr": "10.0.0.1", 00:22:13.045 "trsvcid": "41618" 00:22:13.045 }, 00:22:13.045 "auth": { 00:22:13.045 "state": "completed", 00:22:13.045 "digest": "sha512", 00:22:13.045 "dhgroup": "ffdhe6144" 00:22:13.045 } 00:22:13.045 } 00:22:13.045 ]' 00:22:13.045 05:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:13.045 05:11:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:13.045 05:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:13.045 05:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:13.045 05:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:13.303 05:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.303 05:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.303 05:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.560 05:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDBiYTY2ZDU5NGQ0OWZiY2M0NjEyNTU1OWU1MjVmNDg2Y2I1MzYzZjU1N2ZkMGZicfhNbA==: --dhchap-ctrl-secret DHHC-1:03:N2U3ZmUxOGFlZmMyNTBkNGFiNjViMWFiM2JkN2M5NDU3MDNiMzFkNDA4YTMwNzM4MDVkMjg4Njk3MDliZDZlY/8RXHc=: 00:22:14.488 05:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.488 05:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:14.488 05:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.488 05:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.488 05:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.488 05:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:14.488 05:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:14.488 05:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:14.744 05:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:22:14.744 05:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:14.744 05:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:14.744 05:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:14.744 05:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:14.744 05:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.744 05:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.744 05:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.744 05:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.744 05:11:21 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.744 05:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.744 05:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.307 00:22:15.307 05:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:15.307 05:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:15.307 05:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.563 05:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.563 05:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.563 05:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.563 05:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.563 05:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.563 05:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:15.563 { 00:22:15.563 "cntlid": 131, 00:22:15.563 "qid": 0, 00:22:15.563 "state": "enabled", 00:22:15.563 "thread": "nvmf_tgt_poll_group_000", 00:22:15.564 "listen_address": { 00:22:15.564 "trtype": "TCP", 00:22:15.564 "adrfam": "IPv4", 00:22:15.564 "traddr": "10.0.0.2", 00:22:15.564 "trsvcid": "4420" 00:22:15.564 }, 00:22:15.564 "peer_address": { 00:22:15.564 "trtype": "TCP", 00:22:15.564 "adrfam": "IPv4", 00:22:15.564 "traddr": "10.0.0.1", 00:22:15.564 "trsvcid": "41638" 00:22:15.564 }, 00:22:15.564 "auth": { 00:22:15.564 "state": "completed", 00:22:15.564 "digest": "sha512", 00:22:15.564 "dhgroup": "ffdhe6144" 00:22:15.564 } 00:22:15.564 } 00:22:15.564 ]' 00:22:15.564 05:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:15.564 05:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:15.564 05:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:15.564 05:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:15.564 05:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:15.564 05:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.564 05:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.564 05:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.820 05:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDg5OGZiZjU2YjBmMWU0NzYyOGJkZDg2MGIyODI3YWYhEdFv: --dhchap-ctrl-secret DHHC-1:02:MjY4MzU1YTJiZTE3YmU2N2VkYjMwMmRkYzNlYjk0OTNhNTM3ZWU5YjgxYTU3MmUxSzAYhg==: 00:22:16.749 05:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.749 05:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:16.749 05:11:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.749 05:11:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.749 05:11:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.749 05:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:16.749 05:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:16.749 05:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:17.006 05:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:22:17.006 05:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:17.006 05:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:17.006 05:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:17.006 05:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:17.006 05:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.006 05:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.006 05:11:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.006 05:11:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.006 05:11:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.006 05:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.006 05:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.571 00:22:17.571 05:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:17.571 05:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:17.571 05:11:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.829 05:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.829 05:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.829 05:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.829 05:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.829 05:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.829 05:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:17.829 { 00:22:17.829 "cntlid": 133, 00:22:17.829 "qid": 0, 00:22:17.829 "state": "enabled", 00:22:17.829 "thread": "nvmf_tgt_poll_group_000", 00:22:17.829 "listen_address": { 00:22:17.829 "trtype": "TCP", 00:22:17.829 "adrfam": "IPv4", 00:22:17.829 "traddr": "10.0.0.2", 00:22:17.829 "trsvcid": "4420" 00:22:17.829 }, 00:22:17.829 "peer_address": { 00:22:17.829 "trtype": "TCP", 00:22:17.829 "adrfam": "IPv4", 00:22:17.829 "traddr": "10.0.0.1", 00:22:17.829 "trsvcid": "41668" 00:22:17.829 }, 00:22:17.829 "auth": { 00:22:17.829 "state": "completed", 00:22:17.829 "digest": "sha512", 00:22:17.829 "dhgroup": "ffdhe6144" 00:22:17.829 } 00:22:17.829 } 00:22:17.829 ]' 00:22:17.829 05:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:17.829 05:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:17.829 05:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:17.829 05:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:17.829 05:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:18.086 05:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.086 05:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.086 05:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.354 05:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWFkMTJmZTgxMGU5NTY3YWQ4ZTg3Yjc5OWMyMmI1MjA2YTRmYWM1NGE2ODk4MDJheb3/8w==: --dhchap-ctrl-secret DHHC-1:01:NWY0ODgyMWZjMWE2NWNmZmQ2ZDgyNjFlMzg0YTU2ZDgcsnP3: 00:22:19.291 05:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.291 05:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:19.291 05:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.291 05:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.292 05:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.292 05:11:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:19.292 05:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:19.292 05:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:19.549 05:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:22:19.549 05:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:19.549 05:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:19.549 05:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:19.549 05:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:19.549 05:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.549 05:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:19.549 05:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.549 05:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.549 05:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.549 05:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:19.549 05:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:20.115 00:22:20.115 05:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:20.115 05:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.115 05:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:20.373 05:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.373 05:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.373 05:11:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.373 05:11:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.373 05:11:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.373 05:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:20.373 { 00:22:20.373 "cntlid": 135, 00:22:20.373 "qid": 0, 00:22:20.373 "state": "enabled", 00:22:20.373 "thread": "nvmf_tgt_poll_group_000", 00:22:20.373 "listen_address": { 00:22:20.373 "trtype": "TCP", 00:22:20.373 "adrfam": "IPv4", 00:22:20.373 "traddr": "10.0.0.2", 00:22:20.373 "trsvcid": "4420" 00:22:20.373 }, 
00:22:20.373 "peer_address": { 00:22:20.373 "trtype": "TCP", 00:22:20.373 "adrfam": "IPv4", 00:22:20.373 "traddr": "10.0.0.1", 00:22:20.373 "trsvcid": "41688" 00:22:20.373 }, 00:22:20.373 "auth": { 00:22:20.373 "state": "completed", 00:22:20.373 "digest": "sha512", 00:22:20.373 "dhgroup": "ffdhe6144" 00:22:20.373 } 00:22:20.373 } 00:22:20.373 ]' 00:22:20.373 05:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:20.373 05:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:20.373 05:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:20.373 05:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:20.373 05:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:20.373 05:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.373 05:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.373 05:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.631 05:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MWJmZjFjOGFiZGM2YjU5MzU1ZWJiNmYxNWQ0ZTBiNmZkODFlNDNkOTA0YmI5MjBhMGRjMTRhODNlZjEzNWQxYyLyhT0=: 00:22:21.601 05:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.601 05:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:21.601 05:11:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.601 05:11:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.601 05:11:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.601 05:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:21.601 05:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:21.601 05:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:21.601 05:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:21.859 05:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:22:21.859 05:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:21.859 05:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:21.859 05:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:21.859 05:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:21.859 05:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:22:21.859 05:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.859 05:11:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.859 05:11:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.859 05:11:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.859 05:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.859 05:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.793 00:22:22.793 05:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:22.793 05:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:22.793 05:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.051 05:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.051 05:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.051 05:11:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.051 05:11:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.051 05:11:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.051 05:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:23.051 { 00:22:23.051 "cntlid": 137, 00:22:23.051 "qid": 0, 00:22:23.051 "state": "enabled", 00:22:23.051 "thread": "nvmf_tgt_poll_group_000", 00:22:23.051 "listen_address": { 00:22:23.051 "trtype": "TCP", 00:22:23.051 "adrfam": "IPv4", 00:22:23.051 "traddr": "10.0.0.2", 00:22:23.051 "trsvcid": "4420" 00:22:23.051 }, 00:22:23.051 "peer_address": { 00:22:23.051 "trtype": "TCP", 00:22:23.051 "adrfam": "IPv4", 00:22:23.051 "traddr": "10.0.0.1", 00:22:23.051 "trsvcid": "60756" 00:22:23.051 }, 00:22:23.051 "auth": { 00:22:23.051 "state": "completed", 00:22:23.051 "digest": "sha512", 00:22:23.051 "dhgroup": "ffdhe8192" 00:22:23.051 } 00:22:23.051 } 00:22:23.051 ]' 00:22:23.051 05:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:23.051 05:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:23.051 05:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:23.309 05:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:23.309 05:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:23.309 05:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.309 05:11:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.309 05:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.567 05:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDBiYTY2ZDU5NGQ0OWZiY2M0NjEyNTU1OWU1MjVmNDg2Y2I1MzYzZjU1N2ZkMGZicfhNbA==: --dhchap-ctrl-secret DHHC-1:03:N2U3ZmUxOGFlZmMyNTBkNGFiNjViMWFiM2JkN2M5NDU3MDNiMzFkNDA4YTMwNzM4MDVkMjg4Njk3MDliZDZlY/8RXHc=: 00:22:24.498 05:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.498 05:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:24.498 05:11:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.498 05:11:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.498 05:11:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.498 05:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:24.498 05:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:24.498 05:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:24.756 05:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:22:24.756 05:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:24.756 05:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:24.756 05:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:24.756 05:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:24.756 05:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.756 05:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.756 05:11:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.756 05:11:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.756 05:11:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.756 05:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.756 05:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:25.688 00:22:25.688 05:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:25.688 05:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.688 05:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:25.688 05:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.688 05:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.688 05:11:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.688 05:11:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.688 05:11:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.688 05:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:25.688 { 00:22:25.688 "cntlid": 139, 00:22:25.688 "qid": 0, 00:22:25.688 "state": "enabled", 00:22:25.688 "thread": "nvmf_tgt_poll_group_000", 00:22:25.688 "listen_address": { 00:22:25.688 "trtype": "TCP", 00:22:25.688 "adrfam": "IPv4", 00:22:25.688 "traddr": "10.0.0.2", 00:22:25.688 "trsvcid": "4420" 00:22:25.688 }, 00:22:25.688 "peer_address": { 00:22:25.688 "trtype": "TCP", 00:22:25.688 "adrfam": "IPv4", 00:22:25.688 "traddr": "10.0.0.1", 00:22:25.688 "trsvcid": "60778" 00:22:25.688 }, 00:22:25.688 "auth": { 00:22:25.688 "state": "completed", 00:22:25.688 "digest": "sha512", 00:22:25.688 "dhgroup": "ffdhe8192" 00:22:25.688 } 00:22:25.688 } 00:22:25.688 ]' 00:22:25.688 05:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:25.945 05:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:25.945 05:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:25.945 05:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:25.945 05:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:25.945 05:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.945 05:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.945 05:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.201 05:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDg5OGZiZjU2YjBmMWU0NzYyOGJkZDg2MGIyODI3YWYhEdFv: --dhchap-ctrl-secret DHHC-1:02:MjY4MzU1YTJiZTE3YmU2N2VkYjMwMmRkYzNlYjk0OTNhNTM3ZWU5YjgxYTU3MmUxSzAYhg==: 00:22:27.131 05:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.131 05:11:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:27.131 05:11:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.131 05:11:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.131 05:11:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.131 05:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:27.131 05:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:27.131 05:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:27.389 05:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:22:27.389 05:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:27.389 05:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:27.389 05:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:27.389 05:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:27.389 05:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.389 05:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:27.389 05:11:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.389 05:11:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.389 05:11:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.389 05:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:27.389 05:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.321 00:22:28.321 05:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:28.321 05:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:28.321 05:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.580 05:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.580 05:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.580 05:11:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.580 05:11:34 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:28.580 05:11:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.580 05:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:28.580 { 00:22:28.580 "cntlid": 141, 00:22:28.580 "qid": 0, 00:22:28.580 "state": "enabled", 00:22:28.580 "thread": "nvmf_tgt_poll_group_000", 00:22:28.580 "listen_address": { 00:22:28.580 "trtype": "TCP", 00:22:28.580 "adrfam": "IPv4", 00:22:28.580 "traddr": "10.0.0.2", 00:22:28.580 "trsvcid": "4420" 00:22:28.580 }, 00:22:28.580 "peer_address": { 00:22:28.580 "trtype": "TCP", 00:22:28.580 "adrfam": "IPv4", 00:22:28.580 "traddr": "10.0.0.1", 00:22:28.580 "trsvcid": "60796" 00:22:28.580 }, 00:22:28.580 "auth": { 00:22:28.580 "state": "completed", 00:22:28.580 "digest": "sha512", 00:22:28.580 "dhgroup": "ffdhe8192" 00:22:28.580 } 00:22:28.580 } 00:22:28.580 ]' 00:22:28.580 05:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:28.580 05:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:28.580 05:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:28.580 05:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:28.580 05:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:28.580 05:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.580 05:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.580 05:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.838 05:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWFkMTJmZTgxMGU5NTY3YWQ4ZTg3Yjc5OWMyMmI1MjA2YTRmYWM1NGE2ODk4MDJheb3/8w==: --dhchap-ctrl-secret DHHC-1:01:NWY0ODgyMWZjMWE2NWNmZmQ2ZDgyNjFlMzg0YTU2ZDgcsnP3: 00:22:29.771 05:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.771 05:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:29.771 05:11:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.771 05:11:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.771 05:11:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.771 05:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:29.771 05:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:29.771 05:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:30.029 05:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:22:30.029 05:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:30.029 05:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:30.029 05:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:30.029 05:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:30.029 05:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.029 05:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:30.029 05:11:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.029 05:11:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.029 05:11:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.029 05:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:30.029 05:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:30.962 00:22:30.962 05:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:30.962 05:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:30.962 05:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.220 05:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.220 05:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.220 05:11:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.220 05:11:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.221 05:11:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.221 05:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:31.221 { 00:22:31.221 "cntlid": 143, 00:22:31.221 "qid": 0, 00:22:31.221 "state": "enabled", 00:22:31.221 "thread": "nvmf_tgt_poll_group_000", 00:22:31.221 "listen_address": { 00:22:31.221 "trtype": "TCP", 00:22:31.221 "adrfam": "IPv4", 00:22:31.221 "traddr": "10.0.0.2", 00:22:31.221 "trsvcid": "4420" 00:22:31.221 }, 00:22:31.221 "peer_address": { 00:22:31.221 "trtype": "TCP", 00:22:31.221 "adrfam": "IPv4", 00:22:31.221 "traddr": "10.0.0.1", 00:22:31.221 "trsvcid": "60826" 00:22:31.221 }, 00:22:31.221 "auth": { 00:22:31.221 "state": "completed", 00:22:31.221 "digest": "sha512", 00:22:31.221 "dhgroup": "ffdhe8192" 00:22:31.221 } 00:22:31.221 } 00:22:31.221 ]' 00:22:31.221 05:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:31.221 05:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:31.221 
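[Editor's note: the @44-@49 check sequence running in this stretch of the trace repeats for every keyid in the sweep: confirm the host-side controller exists, read the negotiated auth parameters off the target's qpair, then detach. A condensed sketch of one pass follows; rpc_py, host_sock, and subnqn are illustrative stand-ins for the paths used in this run, and the target-side calls, which the script actually drives through rpc_cmd on the default socket, are collapsed onto rpc_py for brevity.]

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  host_sock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0

  # @44: the attach must have produced a controller named nvme0 on the host.
  [[ $("$rpc_py" -s "$host_sock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # @45-@48: fetch the subsystem's qpairs and assert what was negotiated.
  qpairs=$("$rpc_py" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # @49: detach so the next keyid starts from a clean host state.
  "$rpc_py" -s "$host_sock" bdev_nvme_detach_controller nvme0
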
05:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:31.478 05:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:31.478 05:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:31.478 05:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.478 05:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.478 05:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.736 05:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MWJmZjFjOGFiZGM2YjU5MzU1ZWJiNmYxNWQ0ZTBiNmZkODFlNDNkOTA0YmI5MjBhMGRjMTRhODNlZjEzNWQxYyLyhT0=: 00:22:32.671 05:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:32.671 05:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:32.671 05:11:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.671 05:11:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.671 05:11:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.671 05:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:32.671 05:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:32.671 05:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:32.671 05:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:32.671 05:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:32.671 05:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:32.929 05:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:32.929 05:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:32.929 05:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:32.929 05:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:32.929 05:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:32.929 05:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.929 05:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:22:32.929 05:11:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.929 05:11:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.929 05:11:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.930 05:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:32.930 05:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.863 00:22:33.863 05:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:33.863 05:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:33.863 05:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.121 05:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.121 05:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:34.121 05:11:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.121 05:11:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.121 05:11:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.121 05:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:34.121 { 00:22:34.121 "cntlid": 145, 00:22:34.121 "qid": 0, 00:22:34.121 "state": "enabled", 00:22:34.121 "thread": "nvmf_tgt_poll_group_000", 00:22:34.121 "listen_address": { 00:22:34.121 "trtype": "TCP", 00:22:34.121 "adrfam": "IPv4", 00:22:34.121 "traddr": "10.0.0.2", 00:22:34.121 "trsvcid": "4420" 00:22:34.121 }, 00:22:34.121 "peer_address": { 00:22:34.121 "trtype": "TCP", 00:22:34.121 "adrfam": "IPv4", 00:22:34.121 "traddr": "10.0.0.1", 00:22:34.121 "trsvcid": "58570" 00:22:34.121 }, 00:22:34.121 "auth": { 00:22:34.121 "state": "completed", 00:22:34.121 "digest": "sha512", 00:22:34.121 "dhgroup": "ffdhe8192" 00:22:34.121 } 00:22:34.121 } 00:22:34.121 ]' 00:22:34.121 05:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:34.122 05:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:34.122 05:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:34.122 05:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:34.122 05:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:34.385 05:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:34.385 05:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.385 05:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.664 05:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDBiYTY2ZDU5NGQ0OWZiY2M0NjEyNTU1OWU1MjVmNDg2Y2I1MzYzZjU1N2ZkMGZicfhNbA==: --dhchap-ctrl-secret DHHC-1:03:N2U3ZmUxOGFlZmMyNTBkNGFiNjViMWFiM2JkN2M5NDU3MDNiMzFkNDA4YTMwNzM4MDVkMjg4Njk3MDliZDZlY/8RXHc=: 00:22:35.597 05:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:35.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:35.597 05:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:35.597 05:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.597 05:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.597 05:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.597 05:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:35.597 05:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.598 05:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.598 05:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.598 05:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:35.598 05:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:35.598 05:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:35.598 05:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:35.598 05:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:35.598 05:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:35.598 05:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:35.598 05:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:35.598 05:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:22:36.529 request: 00:22:36.529 { 00:22:36.529 "name": "nvme0", 00:22:36.529 "trtype": "tcp", 00:22:36.529 "traddr": "10.0.0.2", 00:22:36.529 "adrfam": "ipv4", 00:22:36.529 "trsvcid": "4420", 00:22:36.529 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:36.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:36.529 "prchk_reftag": false, 00:22:36.529 "prchk_guard": false, 00:22:36.529 "hdgst": false, 00:22:36.529 "ddgst": false, 00:22:36.529 "dhchap_key": "key2", 00:22:36.529 "method": "bdev_nvme_attach_controller", 00:22:36.529 "req_id": 1 00:22:36.529 } 00:22:36.529 Got JSON-RPC error response 00:22:36.529 response: 00:22:36.529 { 00:22:36.529 "code": -5, 00:22:36.529 "message": "Input/output error" 00:22:36.529 } 00:22:36.529 05:11:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:36.529 05:11:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:36.529 05:11:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:36.529 05:11:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:36.529 05:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:36.529 05:11:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.529 05:11:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.529 05:11:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.529 05:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.529 05:11:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.529 05:11:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.529 05:11:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.529 05:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:36.529 05:11:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:36.529 05:11:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:36.529 05:11:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:36.529 05:11:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:36.529 05:11:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:36.529 05:11:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:36.529 05:11:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:36.529 05:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:37.093 request: 00:22:37.094 { 00:22:37.094 "name": "nvme0", 00:22:37.094 "trtype": "tcp", 00:22:37.094 "traddr": "10.0.0.2", 00:22:37.094 "adrfam": "ipv4", 00:22:37.094 "trsvcid": "4420", 00:22:37.094 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:37.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:37.094 "prchk_reftag": false, 00:22:37.094 "prchk_guard": false, 00:22:37.094 "hdgst": false, 00:22:37.094 "ddgst": false, 00:22:37.094 "dhchap_key": "key1", 00:22:37.094 "dhchap_ctrlr_key": "ckey2", 00:22:37.094 "method": "bdev_nvme_attach_controller", 00:22:37.094 "req_id": 1 00:22:37.094 } 00:22:37.094 Got JSON-RPC error response 00:22:37.094 response: 00:22:37.094 { 00:22:37.094 "code": -5, 00:22:37.094 "message": "Input/output error" 00:22:37.094 } 00:22:37.094 05:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:37.094 05:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:37.094 05:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:37.094 05:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:37.094 05:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.094 05:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.094 05:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.094 05:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.094 05:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:37.094 05:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.094 05:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.094 05:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.094 05:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.094 05:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:37.094 05:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.094 05:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:22:37.094 05:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:37.094 05:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:37.094 05:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:37.094 05:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.094 05:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.026 request: 00:22:38.026 { 00:22:38.026 "name": "nvme0", 00:22:38.026 "trtype": "tcp", 00:22:38.026 "traddr": "10.0.0.2", 00:22:38.026 "adrfam": "ipv4", 00:22:38.026 "trsvcid": "4420", 00:22:38.026 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:38.026 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:38.026 "prchk_reftag": false, 00:22:38.026 "prchk_guard": false, 00:22:38.026 "hdgst": false, 00:22:38.026 "ddgst": false, 00:22:38.026 "dhchap_key": "key1", 00:22:38.026 "dhchap_ctrlr_key": "ckey1", 00:22:38.026 "method": "bdev_nvme_attach_controller", 00:22:38.026 "req_id": 1 00:22:38.026 } 00:22:38.026 Got JSON-RPC error response 00:22:38.026 response: 00:22:38.026 { 00:22:38.026 "code": -5, 00:22:38.026 "message": "Input/output error" 00:22:38.026 } 00:22:38.026 05:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:38.026 05:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:38.026 05:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:38.026 05:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:38.026 05:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:38.026 05:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.026 05:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.026 05:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.026 05:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 704386 00:22:38.026 05:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 704386 ']' 00:22:38.026 05:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 704386 00:22:38.026 05:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:38.026 05:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:38.026 05:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 704386 00:22:38.026 05:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:38.026 05:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 
= sudo ']' 00:22:38.026 05:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 704386' 00:22:38.026 killing process with pid 704386 00:22:38.026 05:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 704386 00:22:38.026 05:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 704386 00:22:39.399 05:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:39.399 05:11:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:39.399 05:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:39.399 05:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.399 05:11:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=727018 00:22:39.399 05:11:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:39.399 05:11:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 727018 00:22:39.399 05:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 727018 ']' 00:22:39.399 05:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.399 05:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:39.399 05:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.399 05:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:39.399 05:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.772 05:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:40.772 05:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:40.772 05:11:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:40.772 05:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:40.772 05:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.772 05:11:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.772 05:11:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:40.772 05:11:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 727018 00:22:40.772 05:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 727018 ']' 00:22:40.772 05:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.772 05:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:40.772 05:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
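[Editor's note: the failure cases at @118, @125, and @132 above, and @158, @169, and @188 below, all lean on the NOT wrapper from autotest_common.sh: the test passes only when the deliberately mismatched attach fails. A minimal sketch of the assertion, with NOT reduced to a plain status-inverting stand-in (the real helper also distinguishes exit codes, as the es=1 bookkeeping in the trace shows) and hostnqn as an illustrative variable:]

  # Simplified stand-in for autotest_common.sh's NOT helper.
  NOT() { ! "$@"; }

  # @118: only key1 is provisioned for this host on the subsystem, so
  # attaching with key2 must fail DH-HMAC-CHAP and surface as the
  # JSON-RPC error -5 (Input/output error) seen in the trace.
  NOT "$rpc_py" -s "$host_sock" bdev_nvme_attach_controller \
      -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
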
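[Editor's note: with the first target (pid 704386) killed above, auth.sh@139 brings up a fresh nvmf_tgt with --wait-for-rpc -L nvmf_auth so the remaining cases run with auth-level debug logging, and @142-@143 wait for the RPC socket before replaying the configuration. Roughly, per the resolved command line in the trace; the backgrounding and PID capture are assumptions, since the log only records the expanded command and the resulting pid 727018:]

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!

  # Block until the app answers on /var/tmp/spdk.sock; auth.sh@143 then
  # replays the subsystem/listener setup through rpc_cmd (the batch body
  # itself is not captured in this trace).
  waitforlisten "$nvmfpid"
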
00:22:40.772 05:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:40.772 05:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.772 05:11:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:40.772 05:11:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:40.772 05:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:40.772 05:11:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.772 05:11:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.337 05:11:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.337 05:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:41.337 05:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:41.337 05:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:41.337 05:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:41.337 05:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:41.337 05:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.337 05:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:41.337 05:11:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.337 05:11:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.337 05:11:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.337 05:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:41.337 05:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:41.902 00:22:42.160 05:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:42.160 05:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:42.160 05:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.160 05:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.160 05:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:42.160 05:11:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.160 05:11:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.417 05:11:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.417 05:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:42.417 { 00:22:42.417 
"cntlid": 1, 00:22:42.417 "qid": 0, 00:22:42.417 "state": "enabled", 00:22:42.417 "thread": "nvmf_tgt_poll_group_000", 00:22:42.417 "listen_address": { 00:22:42.417 "trtype": "TCP", 00:22:42.417 "adrfam": "IPv4", 00:22:42.417 "traddr": "10.0.0.2", 00:22:42.417 "trsvcid": "4420" 00:22:42.417 }, 00:22:42.417 "peer_address": { 00:22:42.417 "trtype": "TCP", 00:22:42.417 "adrfam": "IPv4", 00:22:42.417 "traddr": "10.0.0.1", 00:22:42.417 "trsvcid": "37818" 00:22:42.417 }, 00:22:42.417 "auth": { 00:22:42.417 "state": "completed", 00:22:42.417 "digest": "sha512", 00:22:42.417 "dhgroup": "ffdhe8192" 00:22:42.417 } 00:22:42.417 } 00:22:42.417 ]' 00:22:42.417 05:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:42.417 05:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:42.417 05:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:42.417 05:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:42.417 05:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:42.417 05:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.417 05:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.417 05:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.673 05:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MWJmZjFjOGFiZGM2YjU5MzU1ZWJiNmYxNWQ0ZTBiNmZkODFlNDNkOTA0YmI5MjBhMGRjMTRhODNlZjEzNWQxYyLyhT0=: 00:22:43.606 05:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:43.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:43.606 05:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:43.606 05:11:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.606 05:11:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.606 05:11:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.606 05:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:43.606 05:11:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.606 05:11:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.606 05:11:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.606 05:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:43.606 05:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:43.864 05:11:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:43.864 05:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:43.864 05:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:43.864 05:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:43.864 05:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:43.864 05:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:43.864 05:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:43.864 05:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:43.864 05:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:44.122 request: 00:22:44.122 { 00:22:44.122 "name": "nvme0", 00:22:44.122 "trtype": "tcp", 00:22:44.122 "traddr": "10.0.0.2", 00:22:44.122 "adrfam": "ipv4", 00:22:44.122 "trsvcid": "4420", 00:22:44.122 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:44.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:44.122 "prchk_reftag": false, 00:22:44.122 "prchk_guard": false, 00:22:44.122 "hdgst": false, 00:22:44.122 "ddgst": false, 00:22:44.122 "dhchap_key": "key3", 00:22:44.122 "method": "bdev_nvme_attach_controller", 00:22:44.122 "req_id": 1 00:22:44.122 } 00:22:44.122 Got JSON-RPC error response 00:22:44.122 response: 00:22:44.122 { 00:22:44.122 "code": -5, 00:22:44.122 "message": "Input/output error" 00:22:44.122 } 00:22:44.122 05:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:44.122 05:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:44.122 05:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:44.122 05:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:44.122 05:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:44.122 05:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:44.122 05:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:44.122 05:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:44.380 05:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:44.380 05:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:44.380 05:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:44.380 05:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:44.380 05:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:44.380 05:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:44.380 05:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:44.380 05:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:44.380 05:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:44.638 request: 00:22:44.638 { 00:22:44.638 "name": "nvme0", 00:22:44.638 "trtype": "tcp", 00:22:44.638 "traddr": "10.0.0.2", 00:22:44.638 "adrfam": "ipv4", 00:22:44.638 "trsvcid": "4420", 00:22:44.638 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:44.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:44.638 "prchk_reftag": false, 00:22:44.638 "prchk_guard": false, 00:22:44.638 "hdgst": false, 00:22:44.638 "ddgst": false, 00:22:44.638 "dhchap_key": "key3", 00:22:44.638 "method": "bdev_nvme_attach_controller", 00:22:44.638 "req_id": 1 00:22:44.638 } 00:22:44.638 Got JSON-RPC error response 00:22:44.638 response: 00:22:44.638 { 00:22:44.638 "code": -5, 00:22:44.638 "message": "Input/output error" 00:22:44.638 } 00:22:44.638 05:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:44.638 05:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:44.638 05:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:44.638 05:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:44.638 05:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:44.638 05:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:44.638 05:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:44.638 05:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:44.638 05:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:44.638 05:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:44.896 05:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:44.896 05:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.896 05:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.896 05:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.896 05:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:44.896 05:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.896 05:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.896 05:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.896 05:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:44.896 05:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:44.896 05:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:44.896 05:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:44.896 05:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:44.896 05:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:44.896 05:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:44.896 05:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:44.896 05:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:45.154 request: 00:22:45.154 { 00:22:45.154 "name": "nvme0", 00:22:45.154 "trtype": "tcp", 00:22:45.154 "traddr": "10.0.0.2", 00:22:45.154 "adrfam": "ipv4", 00:22:45.154 "trsvcid": "4420", 00:22:45.154 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:45.154 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:45.154 "prchk_reftag": false, 00:22:45.154 "prchk_guard": false, 00:22:45.154 "hdgst": false, 00:22:45.154 "ddgst": false, 00:22:45.154 
"dhchap_key": "key0", 00:22:45.154 "dhchap_ctrlr_key": "key1", 00:22:45.154 "method": "bdev_nvme_attach_controller", 00:22:45.154 "req_id": 1 00:22:45.154 } 00:22:45.154 Got JSON-RPC error response 00:22:45.154 response: 00:22:45.154 { 00:22:45.154 "code": -5, 00:22:45.154 "message": "Input/output error" 00:22:45.154 } 00:22:45.154 05:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:45.154 05:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:45.154 05:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:45.154 05:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:45.154 05:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:45.154 05:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:45.720 00:22:45.720 05:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:45.720 05:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:45.720 05:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.720 05:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.720 05:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.720 05:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.978 05:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:45.978 05:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:45.978 05:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 704536 00:22:45.978 05:11:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 704536 ']' 00:22:45.978 05:11:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 704536 00:22:45.978 05:11:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:45.978 05:11:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:45.978 05:11:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 704536 00:22:45.978 05:11:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:45.978 05:11:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:45.978 05:11:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 704536' 00:22:45.978 killing process with pid 704536 00:22:45.978 05:11:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 704536 00:22:45.978 05:11:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 704536 00:22:48.519 
05:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:48.519 05:11:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:48.519 05:11:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:48.519 05:11:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:48.519 05:11:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:22:48.519 05:11:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:48.519 05:11:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:48.519 rmmod nvme_tcp 00:22:48.519 rmmod nvme_fabrics 00:22:48.519 rmmod nvme_keyring 00:22:48.519 05:11:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:48.519 05:11:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:48.519 05:11:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:48.519 05:11:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 727018 ']' 00:22:48.519 05:11:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 727018 00:22:48.519 05:11:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 727018 ']' 00:22:48.519 05:11:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 727018 00:22:48.519 05:11:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:48.519 05:11:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:48.519 05:11:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 727018 00:22:48.519 05:11:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:48.519 05:11:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:48.519 05:11:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 727018' 00:22:48.519 killing process with pid 727018 00:22:48.519 05:11:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 727018 00:22:48.519 05:11:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 727018 00:22:49.908 05:11:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:49.908 05:11:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:49.908 05:11:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:49.908 05:11:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:49.908 05:11:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:49.908 05:11:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.908 05:11:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:49.908 05:11:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.812 05:11:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:51.812 05:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.uk7 /tmp/spdk.key-sha256.gZQ /tmp/spdk.key-sha384.kYK /tmp/spdk.key-sha512.bSg /tmp/spdk.key-sha512.VEF /tmp/spdk.key-sha384.l3E /tmp/spdk.key-sha256.p8r '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
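
Annotation: nvmftestfini above tears the fabric down in a fixed order: sync, retry-unload nvme-tcp (which drags nvme_fabrics and nvme_keyring out with it, hence the three rmmod lines), then kill the nvmf_tgt reactor process recorded in nvmfpid. A minimal sketch of the pattern, with the pid taken from this run:

    sync
    set +e                        # rmmod may legitimately fail while connections drain
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
    done
    modprobe -v -r nvme-fabrics
    set -e
    nvmfpid=727018                # value printed by this run
    kill "$nvmfpid" && wait "$nvmfpid"
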
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:51.812 00:22:51.812 real 3m14.180s 00:22:51.812 user 7m28.022s 00:22:51.812 sys 0m24.661s 00:22:51.812 05:11:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:51.812 05:11:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.812 ************************************ 00:22:51.812 END TEST nvmf_auth_target 00:22:51.812 ************************************ 00:22:51.812 05:11:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:51.812 05:11:58 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:51.812 05:11:58 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:51.812 05:11:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:51.812 05:11:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:51.812 05:11:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:51.812 ************************************ 00:22:51.812 START TEST nvmf_bdevio_no_huge 00:22:51.812 ************************************ 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:51.812 * Looking for test storage... 00:22:51.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
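
Annotation: each suite in this run is dispatched through autotest's run_test helper, visible above as run_test nvmf_bdevio_no_huge .../bdevio.sh --transport=tcp --no-hugepages: it prints the starred START TEST/END TEST banners, times the suite (the real/user/sys triplet above), and propagates its exit status. A rough sketch of its shape, inferred from this output alone; the actual helper in autotest_common.sh is more elaborate and may differ in detail:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # suite script plus its --transport/... arguments
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
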
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:51.812 05:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:54.343 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:54.343 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:54.343 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:54.343 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:54.343 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
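
Annotation: the probe above maps each whitelisted PCI function to its kernel interfaces by globbing sysfs rather than parsing lspci output; for the two E810 ports found here (0x8086:0x159b, bound to the ice driver) that yields cvl_0_0 and cvl_0_1. Condensed from the trace:

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep interface names only
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done
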
netns exec "$NVMF_TARGET_NAMESPACE") 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:54.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:54.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:22:54.344 00:22:54.344 --- 10.0.0.2 ping statistics --- 00:22:54.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.344 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:54.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:54.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:22:54.344 00:22:54.344 --- 10.0.0.1 ping statistics --- 00:22:54.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.344 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=730225 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 730225 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 730225 ']' 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:54.344 05:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:54.344 [2024-07-13 05:12:00.536244] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:54.344 [2024-07-13 05:12:00.536405] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:54.344 [2024-07-13 05:12:00.693880] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:54.626 [2024-07-13 05:12:00.937343] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
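
Annotation: what distinguishes this suite is the target invocation above, nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78, run inside the namespace. The --no-huge -s 1024 pair makes SPDK hand DPDK --no-huge --iova-mode=va with a 1024 MB memory cap (visible in the EAL parameter line), so the target runs on ordinary 4 KiB pages instead of reserved hugepages. The core masks decode as:

    # -m 0x78 = 0b1111000 -> cores 3,4,5,6 ("Total cores available: 4"; matches the
    #                        four "Reactor started on core" notices that follow)
    # -c 0x7  = 0b0000111 -> cores 0,1,2 for the bdevio app further below (3 cores)
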
00:22:54.626 [2024-07-13 05:12:00.937414] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.626 [2024-07-13 05:12:00.937438] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.626 [2024-07-13 05:12:00.937456] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.626 [2024-07-13 05:12:00.937475] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:54.626 [2024-07-13 05:12:00.937588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:54.626 [2024-07-13 05:12:00.939902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:22:54.626 [2024-07-13 05:12:00.939971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:54.626 [2024-07-13 05:12:00.939979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:22:55.189 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:55.189 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:22:55.189 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:55.189 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:55.189 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:55.189 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.189 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:55.189 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.189 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:55.190 [2024-07-13 05:12:01.503145] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.190 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.190 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:55.190 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.190 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:55.190 Malloc0 00:22:55.190 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.190 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:55.190 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.190 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:55.190 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.190 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:55.190 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.190 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:55.190 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.190 05:12:01 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:55.190 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.190 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:55.190 [2024-07-13 05:12:01.592500] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.190 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.190 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:55.190 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:55.190 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:55.190 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:55.190 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:55.190 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:55.190 { 00:22:55.190 "params": { 00:22:55.190 "name": "Nvme$subsystem", 00:22:55.190 "trtype": "$TEST_TRANSPORT", 00:22:55.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.190 "adrfam": "ipv4", 00:22:55.190 "trsvcid": "$NVMF_PORT", 00:22:55.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.190 "hdgst": ${hdgst:-false}, 00:22:55.190 "ddgst": ${ddgst:-false} 00:22:55.190 }, 00:22:55.190 "method": "bdev_nvme_attach_controller" 00:22:55.190 } 00:22:55.190 EOF 00:22:55.190 )") 00:22:55.190 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:55.190 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:22:55.190 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:55.190 05:12:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:55.190 "params": { 00:22:55.190 "name": "Nvme1", 00:22:55.190 "trtype": "tcp", 00:22:55.190 "traddr": "10.0.0.2", 00:22:55.190 "adrfam": "ipv4", 00:22:55.190 "trsvcid": "4420", 00:22:55.190 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.190 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:55.190 "hdgst": false, 00:22:55.190 "ddgst": false 00:22:55.190 }, 00:22:55.190 "method": "bdev_nvme_attach_controller" 00:22:55.190 }' 00:22:55.190 [2024-07-13 05:12:01.676666] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
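
Annotation: on the target side the suite provisions a 64 MiB, 512-byte-block malloc bdev and exports it over the namespaced listener; all five RPCs appear verbatim in the trace above. bdevio then receives the generated initiator config on an anonymous descriptor (/dev/fd/62, presumably via process substitution) rather than a file on disk:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # bdevio itself also runs hugepage-free:
    bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024
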
00:22:55.190 [2024-07-13 05:12:01.676823] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid730432 ] 00:22:55.446 [2024-07-13 05:12:01.823422] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:55.703 [2024-07-13 05:12:02.082072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.703 [2024-07-13 05:12:02.082116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.703 [2024-07-13 05:12:02.082122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.266 I/O targets: 00:22:56.266 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:56.266 00:22:56.266 00:22:56.266 CUnit - A unit testing framework for C - Version 2.1-3 00:22:56.266 http://cunit.sourceforge.net/ 00:22:56.266 00:22:56.266 00:22:56.266 Suite: bdevio tests on: Nvme1n1 00:22:56.266 Test: blockdev write read block ...passed 00:22:56.266 Test: blockdev write zeroes read block ...passed 00:22:56.266 Test: blockdev write zeroes read no split ...passed 00:22:56.266 Test: blockdev write zeroes read split ...passed 00:22:56.266 Test: blockdev write zeroes read split partial ...passed 00:22:56.266 Test: blockdev reset ...[2024-07-13 05:12:02.716021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:56.266 [2024-07-13 05:12:02.716228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f1100 (9): Bad file descriptor 00:22:56.523 [2024-07-13 05:12:02.867106] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
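
Annotation: the I/O target geometry reported above follows directly from the earlier bdev_malloc_create 64 512 -b Malloc0: 64 MiB / 512 B = 67,108,864 / 512 = 131,072 blocks, surfaced to the initiator as Nvme1n1 behind nqn.2016-06.io.spdk:cnode1. The "Bad file descriptor" notice in the reset case is the deliberate disconnect that the subsequent successful controller reset recovers from.
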
00:22:56.523 passed 00:22:56.523 Test: blockdev write read 8 blocks ...passed 00:22:56.523 Test: blockdev write read size > 128k ...passed 00:22:56.523 Test: blockdev write read invalid size ...passed 00:22:56.523 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:56.523 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:56.523 Test: blockdev write read max offset ...passed 00:22:56.779 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:56.779 Test: blockdev writev readv 8 blocks ...passed 00:22:56.779 Test: blockdev writev readv 30 x 1block ...passed 00:22:56.779 Test: blockdev writev readv block ...passed 00:22:56.779 Test: blockdev writev readv size > 128k ...passed 00:22:56.779 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:56.779 Test: blockdev comparev and writev ...[2024-07-13 05:12:03.086850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:56.779 [2024-07-13 05:12:03.086948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.779 [2024-07-13 05:12:03.086989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:56.779 [2024-07-13 05:12:03.087017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.779 [2024-07-13 05:12:03.087531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:56.779 [2024-07-13 05:12:03.087566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:56.779 [2024-07-13 05:12:03.087601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:56.779 [2024-07-13 05:12:03.087636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:56.779 [2024-07-13 05:12:03.088147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:56.779 [2024-07-13 05:12:03.088182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:56.779 [2024-07-13 05:12:03.088216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:56.779 [2024-07-13 05:12:03.088252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:56.779 [2024-07-13 05:12:03.088720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:56.779 [2024-07-13 05:12:03.088755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:56.779 [2024-07-13 05:12:03.088805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:56.779 [2024-07-13 05:12:03.088832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:56.779 passed 00:22:56.779 Test: blockdev nvme passthru rw ...passed 00:22:56.779 Test: blockdev nvme passthru vendor specific ...[2024-07-13 05:12:03.171365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:56.779 [2024-07-13 05:12:03.171424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:56.779 [2024-07-13 05:12:03.171686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:56.779 [2024-07-13 05:12:03.171720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:56.780 [2024-07-13 05:12:03.171965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:56.780 [2024-07-13 05:12:03.171999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:56.780 [2024-07-13 05:12:03.172249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:56.780 [2024-07-13 05:12:03.172282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:56.780 passed 00:22:56.780 Test: blockdev nvme admin passthru ...passed 00:22:56.780 Test: blockdev copy ...passed 00:22:56.780 00:22:56.780 Run Summary: Type Total Ran Passed Failed Inactive 00:22:56.780 suites 1 1 n/a 0 0 00:22:56.780 tests 23 23 23 0 0 00:22:56.780 asserts 152 152 152 0 n/a 00:22:56.780 00:22:56.780 Elapsed time = 1.515 seconds 00:22:57.710 05:12:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:57.710 05:12:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.710 05:12:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:57.710 05:12:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.710 05:12:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:57.710 05:12:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:57.710 05:12:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:57.710 05:12:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:57.710 05:12:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:57.710 05:12:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:57.710 05:12:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:57.710 05:12:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:57.710 rmmod nvme_tcp 00:22:57.710 rmmod nvme_fabrics 00:22:57.710 rmmod nvme_keyring 00:22:57.710 05:12:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:57.710 05:12:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:57.710 05:12:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:57.710 05:12:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 730225 ']' 00:22:57.710 05:12:04 nvmf_tcp.nvmf_bdevio_no_huge 
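
Annotation: the NOTICE storms in the comparev/writev and passthru cases above are expected negatives, not regressions. The suite issues fused COMPARE+WRITE pairs whose compare half is meant to miscompare: each COMPARE completes with status (02/85), SCT 2h Media and Data Integrity Errors / SC 85h Compare Failure, which in turn forces its fused WRITE partner to complete with (00/09), Command Aborted due to Failed Fused Command. The (00/01) INVALID OPCODE completions are the deliberate bad-opcode passthru probes. The run summary confirms the intent: 23 of 23 tests ran and passed, 0 failed.
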
-- nvmf/common.sh@490 -- # killprocess 730225 00:22:57.710 05:12:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 730225 ']' 00:22:57.710 05:12:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 730225 00:22:57.710 05:12:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:22:57.710 05:12:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:57.710 05:12:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 730225 00:22:57.710 05:12:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:22:57.710 05:12:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:22:57.710 05:12:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 730225' 00:22:57.710 killing process with pid 730225 00:22:57.710 05:12:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 730225 00:22:57.710 05:12:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 730225 00:22:58.644 05:12:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:58.644 05:12:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:58.644 05:12:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:58.644 05:12:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:58.644 05:12:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:58.644 05:12:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.644 05:12:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:58.644 05:12:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.549 05:12:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:00.549 00:23:00.549 real 0m8.736s 00:23:00.549 user 0m19.826s 00:23:00.549 sys 0m2.863s 00:23:00.549 05:12:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:00.549 05:12:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:00.549 ************************************ 00:23:00.549 END TEST nvmf_bdevio_no_huge 00:23:00.549 ************************************ 00:23:00.549 05:12:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:00.549 05:12:06 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:00.549 05:12:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:00.549 05:12:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:00.549 05:12:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:00.549 ************************************ 00:23:00.549 START TEST nvmf_tls 00:23:00.549 ************************************ 00:23:00.549 05:12:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:00.808 * Looking for test storage... 
00:23:00.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:23:00.808 05:12:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:23:02.707 
05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:02.707 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:02.707 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:02.707 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:02.707 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:02.707 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:02.708 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:02.708 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:02.708 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:02.708 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:02.708 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:02.708 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:02.708 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:02.708 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:02.708 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:02.708 05:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:02.708 05:12:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:02.708 05:12:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:02.708 05:12:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:02.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:02.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:23:02.708 00:23:02.708 --- 10.0.0.2 ping statistics --- 00:23:02.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.708 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:23:02.708 05:12:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:02.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:02.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:23:02.708 00:23:02.708 --- 10.0.0.1 ping statistics --- 00:23:02.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.708 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:23:02.708 05:12:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:02.708 05:12:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:23:02.708 05:12:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:02.708 05:12:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:02.708 05:12:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:02.708 05:12:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:02.708 05:12:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:02.708 05:12:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:02.708 05:12:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:02.708 05:12:09 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:02.708 05:12:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:02.708 05:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:02.708 05:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.708 05:12:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=733138 00:23:02.708 05:12:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:02.708 05:12:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 733138 00:23:02.708 05:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 733138 ']' 00:23:02.708 05:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.708 05:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:02.708 05:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.708 05:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:02.708 05:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.708 [2024-07-13 05:12:09.161250] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:02.708 [2024-07-13 05:12:09.161387] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.966 EAL: No free 2048 kB hugepages reported on node 1 00:23:02.966 [2024-07-13 05:12:09.309346] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.223 [2024-07-13 05:12:09.554381] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:03.223 [2024-07-13 05:12:09.554451] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
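The ping exchange above verifies the split topology that nvmf_tcp_init assembled from the two E810 ports found during the PCI scan. Condensed into a plain sketch (every command, interface name, and address is taken from the trace above; run as root):

ip netns add cvl_0_0_ns_spdk                       # target side gets a private namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # first port becomes the target NIC
ip addr add 10.0.0.1/24 dev cvl_0_1                # second port stays in the root ns (initiator)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator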
00:23:03.223 [2024-07-13 05:12:09.554485] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:03.223 [2024-07-13 05:12:09.554511] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:03.223 [2024-07-13 05:12:09.554532] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:03.223 [2024-07-13 05:12:09.554577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.790 05:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:03.790 05:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:03.790 05:12:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:03.790 05:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:03.790 05:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.790 05:12:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:03.790 05:12:10 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:23:03.790 05:12:10 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:04.047 true 00:23:04.047 05:12:10 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:04.047 05:12:10 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:23:04.304 05:12:10 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:23:04.304 05:12:10 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:23:04.304 05:12:10 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:04.562 05:12:10 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:04.562 05:12:10 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:23:04.820 05:12:11 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:23:04.820 05:12:11 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:23:04.820 05:12:11 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:05.078 05:12:11 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:05.078 05:12:11 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:23:05.335 05:12:11 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:23:05.335 05:12:11 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:23:05.335 05:12:11 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:05.335 05:12:11 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:23:05.594 05:12:11 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:23:05.594 05:12:11 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:23:05.594 05:12:11 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:05.854 05:12:12 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:05.854 05:12:12 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:23:06.113 05:12:12 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:23:06.113 05:12:12 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:23:06.113 05:12:12 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:06.371 05:12:12 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:06.371 05:12:12 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:23:06.628 05:12:12 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:23:06.628 05:12:12 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:23:06.628 05:12:12 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:06.628 05:12:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:06.628 05:12:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:06.628 05:12:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:06.628 05:12:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:06.628 05:12:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:06.628 05:12:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:06.628 05:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:06.628 05:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:06.628 05:12:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:06.628 05:12:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:06.628 05:12:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:06.628 05:12:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:23:06.628 05:12:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:06.628 05:12:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:06.628 05:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:06.628 05:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:23:06.628 05:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.3YRBYwMnVm 00:23:06.628 05:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:06.628 05:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.LwggZ5Ag4c 00:23:06.628 05:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:06.628 05:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:06.628 05:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.3YRBYwMnVm 00:23:06.628 05:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.LwggZ5Ag4c 00:23:06.628 05:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:23:06.885 05:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:07.452 05:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.3YRBYwMnVm 00:23:07.452 05:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.3YRBYwMnVm 00:23:07.452 05:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:07.710 [2024-07-13 05:12:14.200236] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.970 05:12:14 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:08.230 05:12:14 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:08.230 [2024-07-13 05:12:14.693553] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:08.230 [2024-07-13 05:12:14.693857] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:08.230 05:12:14 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:08.797 malloc0 00:23:08.797 05:12:15 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:08.797 05:12:15 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3YRBYwMnVm 00:23:09.056 [2024-07-13 05:12:15.465026] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:09.056 05:12:15 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.3YRBYwMnVm 00:23:09.315 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.300 Initializing NVMe Controllers 00:23:19.301 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:19.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:19.301 Initialization complete. Launching workers. 
00:23:19.301 ======================================================== 00:23:19.301 Latency(us) 00:23:19.301 Device Information : IOPS MiB/s Average min max 00:23:19.301 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5509.49 21.52 11621.69 2288.56 19702.09 00:23:19.301 ======================================================== 00:23:19.301 Total : 5509.49 21.52 11621.69 2288.56 19702.09 00:23:19.301 00:23:19.301 05:12:25 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3YRBYwMnVm 00:23:19.301 05:12:25 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:19.301 05:12:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:19.301 05:12:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:19.301 05:12:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3YRBYwMnVm' 00:23:19.301 05:12:25 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:19.301 05:12:25 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=735186 00:23:19.301 05:12:25 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:19.301 05:12:25 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:19.301 05:12:25 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 735186 /var/tmp/bdevperf.sock 00:23:19.301 05:12:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 735186 ']' 00:23:19.301 05:12:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.301 05:12:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:19.301 05:12:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:19.301 05:12:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:19.301 05:12:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.301 [2024-07-13 05:12:25.793481] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
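The two key files attached above and below (/tmp/tmp.3YRBYwMnVm and /tmp/tmp.LwggZ5Ag4c) hold TLS PSKs in the NVMe interchange format produced by format_interchange_psk at tls.sh@118/@119. A hedged reconstruction of that helper, assuming, from the output alone, that the configured secret is treated as an ASCII string, a little-endian CRC32 trailer is appended, and the result is base64-encoded under the NVMeTLSkey-1:<hash>: prefix:

import base64
import zlib

def format_interchange_psk(secret: str, hash_id: int) -> str:
    # hash_id 1 is the :01: flavor used here; 2 would give the :02: key
    # generated later at tls.sh@159
    data = secret.encode("ascii")
    crc = zlib.crc32(data).to_bytes(4, "little")   # assumption: little-endian CRC32 trailer
    b64 = base64.b64encode(data + crc).decode("ascii")
    return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, b64)

# should reproduce the first key printed above:
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
print(format_interchange_psk("00112233445566778899aabbccddeeff", 1))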
00:23:19.301 [2024-07-13 05:12:25.793626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid735186 ] 00:23:19.560 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.560 [2024-07-13 05:12:25.922591] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.817 [2024-07-13 05:12:26.152808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.384 05:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:20.384 05:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:20.384 05:12:26 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3YRBYwMnVm 00:23:20.642 [2024-07-13 05:12:27.014974] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:20.642 [2024-07-13 05:12:27.015231] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:20.642 TLSTESTn1 00:23:20.642 05:12:27 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:20.901 Running I/O for 10 seconds... 00:23:30.874 00:23:30.874 Latency(us) 00:23:30.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.874 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:30.874 Verification LBA range: start 0x0 length 0x2000 00:23:30.874 TLSTESTn1 : 10.03 2465.15 9.63 0.00 0.00 51813.56 15049.01 71070.15 00:23:30.874 =================================================================================================================== 00:23:30.874 Total : 2465.15 9.63 0.00 0.00 51813.56 15049.01 71070.15 00:23:30.874 0 00:23:30.874 05:12:37 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:30.874 05:12:37 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 735186 00:23:30.874 05:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 735186 ']' 00:23:30.874 05:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 735186 00:23:30.874 05:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:30.874 05:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:30.874 05:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 735186 00:23:30.874 05:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:30.874 05:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:30.874 05:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 735186' 00:23:30.874 killing process with pid 735186 00:23:30.874 05:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 735186 00:23:30.874 Received shutdown signal, test time was about 10.000000 seconds 00:23:30.874 00:23:30.874 Latency(us) 00:23:30.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:23:30.875 =================================================================================================================== 00:23:30.875 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:30.875 [2024-07-13 05:12:37.331411] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:30.875 05:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 735186 00:23:31.805 05:12:38 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LwggZ5Ag4c 00:23:31.805 05:12:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:31.805 05:12:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LwggZ5Ag4c 00:23:31.805 05:12:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:31.805 05:12:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:31.805 05:12:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:31.805 05:12:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:31.805 05:12:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LwggZ5Ag4c 00:23:31.805 05:12:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:31.805 05:12:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:31.805 05:12:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:31.805 05:12:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.LwggZ5Ag4c' 00:23:31.805 05:12:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:31.805 05:12:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=736634 00:23:31.805 05:12:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:31.805 05:12:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:31.805 05:12:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 736634 /var/tmp/bdevperf.sock 00:23:31.805 05:12:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 736634 ']' 00:23:31.805 05:12:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:31.805 05:12:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:31.805 05:12:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:31.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:31.805 05:12:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:31.805 05:12:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.106 [2024-07-13 05:12:38.367050] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:32.106 [2024-07-13 05:12:38.367197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid736634 ] 00:23:32.106 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.106 [2024-07-13 05:12:38.490303] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.365 [2024-07-13 05:12:38.712908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.929 05:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:32.929 05:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:32.929 05:12:39 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LwggZ5Ag4c 00:23:33.185 [2024-07-13 05:12:39.564409] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:33.185 [2024-07-13 05:12:39.564635] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:33.185 [2024-07-13 05:12:39.579273] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:33.185 [2024-07-13 05:12:39.580081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:33.185 [2024-07-13 05:12:39.581054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:33.185 [2024-07-13 05:12:39.582046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:33.185 [2024-07-13 05:12:39.582087] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:33.185 [2024-07-13 05:12:39.582117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
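The valid_exec_arg/es xtrace above comes from wrapping the attach attempt in NOT, the autotest helper that inverts an exit status so that an expected failure passes under set -e; the request/response dump that follows is the captured JSON-RPC error. A simplified, hedged sketch of the wrapper (condensed from common/autotest_common.sh; the optional expected-status argument is omitted):

NOT() {
    local es=0
    "$@" || es=$?
    ((es > 128)) && es=$((es & ~128))   # normalize statuses from fatal signals
    ((es != 0))                         # succeed only if the wrapped command failed
}

# usage, as at target/tls.sh@146 above:
# NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LwggZ5Ag4c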
00:23:33.185 request: 00:23:33.185 { 00:23:33.185 "name": "TLSTEST", 00:23:33.185 "trtype": "tcp", 00:23:33.185 "traddr": "10.0.0.2", 00:23:33.185 "adrfam": "ipv4", 00:23:33.185 "trsvcid": "4420", 00:23:33.185 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.185 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:33.185 "prchk_reftag": false, 00:23:33.185 "prchk_guard": false, 00:23:33.185 "hdgst": false, 00:23:33.185 "ddgst": false, 00:23:33.185 "psk": "/tmp/tmp.LwggZ5Ag4c", 00:23:33.185 "method": "bdev_nvme_attach_controller", 00:23:33.185 "req_id": 1 00:23:33.185 } 00:23:33.185 Got JSON-RPC error response 00:23:33.185 response: 00:23:33.185 { 00:23:33.185 "code": -5, 00:23:33.185 "message": "Input/output error" 00:23:33.185 } 00:23:33.185 05:12:39 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 736634 00:23:33.185 05:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 736634 ']' 00:23:33.185 05:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 736634 00:23:33.185 05:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:33.185 05:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:33.185 05:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 736634 00:23:33.185 05:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:33.185 05:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:33.185 05:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 736634' 00:23:33.185 killing process with pid 736634 00:23:33.185 05:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 736634 00:23:33.185 Received shutdown signal, test time was about 10.000000 seconds 00:23:33.185 00:23:33.185 Latency(us) 00:23:33.185 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.185 =================================================================================================================== 00:23:33.185 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:33.185 05:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 736634 00:23:33.186 [2024-07-13 05:12:39.629911] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:34.119 05:12:40 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:34.119 05:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:34.119 05:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:34.119 05:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:34.119 05:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:34.119 05:12:40 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3YRBYwMnVm 00:23:34.119 05:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:34.119 05:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3YRBYwMnVm 00:23:34.119 05:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:34.119 05:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.119 05:12:40 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:34.119 05:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.119 05:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3YRBYwMnVm 00:23:34.119 05:12:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:34.119 05:12:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:34.119 05:12:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:34.119 05:12:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3YRBYwMnVm' 00:23:34.119 05:12:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:34.119 05:12:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=736909 00:23:34.119 05:12:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:34.119 05:12:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:34.119 05:12:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 736909 /var/tmp/bdevperf.sock 00:23:34.119 05:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 736909 ']' 00:23:34.119 05:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:34.119 05:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:34.120 05:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:34.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:34.120 05:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:34.120 05:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.377 [2024-07-13 05:12:40.676811] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:34.377 [2024-07-13 05:12:40.676980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid736909 ] 00:23:34.377 EAL: No free 2048 kB hugepages reported on node 1 00:23:34.377 [2024-07-13 05:12:40.800218] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.636 [2024-07-13 05:12:41.033845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.202 05:12:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:35.202 05:12:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:35.202 05:12:41 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.3YRBYwMnVm 00:23:35.460 [2024-07-13 05:12:41.912615] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:35.460 [2024-07-13 05:12:41.912828] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:35.460 [2024-07-13 05:12:41.924944] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:35.460 [2024-07-13 05:12:41.925006] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:35.460 [2024-07-13 05:12:41.925084] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:35.460 [2024-07-13 05:12:41.925961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:35.460 [2024-07-13 05:12:41.926945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:35.460 [2024-07-13 05:12:41.927936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.460 [2024-07-13 05:12:41.927977] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:35.460 [2024-07-13 05:12:41.928006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
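This second negative case fails on identity rather than key material: the target looks up the PSK under "NVMe0R01 <hostnqn> <subnqn>" and only host1 was ever registered against cnode1. A hedged sketch of the registration that would make host2 resolvable, mirroring what tls.sh@58 did for host1 (the key path reuses this run's mktemp file):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 \
    --psk /tmp/tmp.3YRBYwMnVm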
00:23:35.460 request: 00:23:35.460 { 00:23:35.460 "name": "TLSTEST", 00:23:35.460 "trtype": "tcp", 00:23:35.460 "traddr": "10.0.0.2", 00:23:35.460 "adrfam": "ipv4", 00:23:35.460 "trsvcid": "4420", 00:23:35.460 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.460 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:35.460 "prchk_reftag": false, 00:23:35.460 "prchk_guard": false, 00:23:35.460 "hdgst": false, 00:23:35.460 "ddgst": false, 00:23:35.460 "psk": "/tmp/tmp.3YRBYwMnVm", 00:23:35.460 "method": "bdev_nvme_attach_controller", 00:23:35.460 "req_id": 1 00:23:35.460 } 00:23:35.460 Got JSON-RPC error response 00:23:35.460 response: 00:23:35.460 { 00:23:35.460 "code": -5, 00:23:35.460 "message": "Input/output error" 00:23:35.460 } 00:23:35.460 05:12:41 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 736909 00:23:35.460 05:12:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 736909 ']' 00:23:35.460 05:12:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 736909 00:23:35.460 05:12:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:35.460 05:12:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:35.460 05:12:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 736909 00:23:35.720 05:12:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:35.720 05:12:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:35.720 05:12:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 736909' 00:23:35.720 killing process with pid 736909 00:23:35.720 05:12:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 736909 00:23:35.720 Received shutdown signal, test time was about 10.000000 seconds 00:23:35.720 00:23:35.720 Latency(us) 00:23:35.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.720 =================================================================================================================== 00:23:35.720 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:35.720 [2024-07-13 05:12:41.973296] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:35.720 05:12:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 736909 00:23:36.655 05:12:42 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:36.655 05:12:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:36.655 05:12:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:36.655 05:12:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:36.655 05:12:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:36.655 05:12:42 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3YRBYwMnVm 00:23:36.655 05:12:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:36.655 05:12:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3YRBYwMnVm 00:23:36.655 05:12:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:36.655 05:12:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:36.655 05:12:42 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:36.655 05:12:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:36.655 05:12:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3YRBYwMnVm 00:23:36.655 05:12:42 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:36.655 05:12:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:36.655 05:12:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:36.655 05:12:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3YRBYwMnVm' 00:23:36.655 05:12:42 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:36.655 05:12:42 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=737185 00:23:36.655 05:12:42 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:36.655 05:12:42 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:36.655 05:12:42 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 737185 /var/tmp/bdevperf.sock 00:23:36.655 05:12:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 737185 ']' 00:23:36.655 05:12:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:36.655 05:12:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:36.655 05:12:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:36.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:36.655 05:12:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:36.655 05:12:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.655 [2024-07-13 05:12:43.009707] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:36.655 [2024-07-13 05:12:43.009858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid737185 ] 00:23:36.655 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.655 [2024-07-13 05:12:43.134549] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.913 [2024-07-13 05:12:43.365682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:37.478 05:12:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:37.478 05:12:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:37.478 05:12:43 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3YRBYwMnVm 00:23:38.045 [2024-07-13 05:12:44.245595] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:38.045 [2024-07-13 05:12:44.245803] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:38.045 [2024-07-13 05:12:44.255799] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:38.045 [2024-07-13 05:12:44.255844] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:38.045 [2024-07-13 05:12:44.255933] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:38.045 [2024-07-13 05:12:44.256810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:38.045 [2024-07-13 05:12:44.257783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:38.045 [2024-07-13 05:12:44.258777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:38.045 [2024-07-13 05:12:44.258826] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:38.045 [2024-07-13 05:12:44.258855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
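Note that the initiator surfaces the identical JSON-RPC code -5 for every one of these cases; the distinguishing reason ("Could not find PSK for identity: ... cnode2") appears only in the target-side log. One way to confirm the cause over the target RPC socket (a hedged example, not part of the test script itself):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
# expected here: the discovery subsystem plus nqn.2016-06.io.spdk:cnode1 only,
# i.e. nqn.2016-06.io.spdk:cnode2 was never created in this run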
00:23:38.045 request: 00:23:38.045 { 00:23:38.045 "name": "TLSTEST", 00:23:38.045 "trtype": "tcp", 00:23:38.045 "traddr": "10.0.0.2", 00:23:38.045 "adrfam": "ipv4", 00:23:38.045 "trsvcid": "4420", 00:23:38.045 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:38.045 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:38.045 "prchk_reftag": false, 00:23:38.045 "prchk_guard": false, 00:23:38.045 "hdgst": false, 00:23:38.045 "ddgst": false, 00:23:38.045 "psk": "/tmp/tmp.3YRBYwMnVm", 00:23:38.045 "method": "bdev_nvme_attach_controller", 00:23:38.045 "req_id": 1 00:23:38.045 } 00:23:38.045 Got JSON-RPC error response 00:23:38.045 response: 00:23:38.045 { 00:23:38.045 "code": -5, 00:23:38.045 "message": "Input/output error" 00:23:38.045 } 00:23:38.045 05:12:44 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 737185 00:23:38.045 05:12:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 737185 ']' 00:23:38.045 05:12:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 737185 00:23:38.045 05:12:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:38.045 05:12:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:38.045 05:12:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 737185 00:23:38.045 05:12:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:38.045 05:12:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:38.045 05:12:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 737185' 00:23:38.045 killing process with pid 737185 00:23:38.045 05:12:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 737185 00:23:38.045 Received shutdown signal, test time was about 10.000000 seconds 00:23:38.045 00:23:38.045 Latency(us) 00:23:38.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.045 =================================================================================================================== 00:23:38.045 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:38.045 [2024-07-13 05:12:44.308084] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:38.045 05:12:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 737185 00:23:38.981 05:12:45 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:38.981 05:12:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:38.981 05:12:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:38.981 05:12:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:38.981 05:12:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:38.981 05:12:45 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:38.981 05:12:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:38.981 05:12:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:38.981 05:12:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:38.981 05:12:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:38.981 05:12:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t 
run_bdevperf 00:23:38.981 05:12:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:38.981 05:12:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:38.981 05:12:45 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:38.981 05:12:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:38.981 05:12:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:38.981 05:12:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:38.981 05:12:45 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:38.981 05:12:45 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=737456 00:23:38.981 05:12:45 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:38.981 05:12:45 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:38.981 05:12:45 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 737456 /var/tmp/bdevperf.sock 00:23:38.981 05:12:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 737456 ']' 00:23:38.981 05:12:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:38.981 05:12:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:38.981 05:12:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:38.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:38.981 05:12:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:38.981 05:12:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.981 [2024-07-13 05:12:45.359844] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:38.981 [2024-07-13 05:12:45.360022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid737456 ] 00:23:38.981 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.241 [2024-07-13 05:12:45.487729] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.241 [2024-07-13 05:12:45.710069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.178 05:12:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:40.178 05:12:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:40.178 05:12:46 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:40.178 [2024-07-13 05:12:46.587419] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:40.178 [2024-07-13 05:12:46.589007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:23:40.178 [2024-07-13 05:12:46.590001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:40.178 [2024-07-13 05:12:46.590033] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:40.178 [2024-07-13 05:12:46.590064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
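Taken together, tls.sh@143 through @155 exercise this attach matrix (a condensed recap; run_bdevperf is the local helper shown above and the key paths are this run's mktemp names):

run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3YRBYwMnVm        # valid key and identity: TLSTESTn1 runs I/O
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LwggZ5Ag4c    # wrong key
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3YRBYwMnVm    # host2 has no registered PSK
NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3YRBYwMnVm    # cnode2 does not exist
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''                     # no PSK against the '-k' (TLS) listener

All four negative attempts come back as code -5 (Input/output error) from bdev_nvme_attach_controller, as the dumps above and below show.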
00:23:40.178 request: 00:23:40.178 { 00:23:40.178 "name": "TLSTEST", 00:23:40.178 "trtype": "tcp", 00:23:40.178 "traddr": "10.0.0.2", 00:23:40.178 "adrfam": "ipv4", 00:23:40.178 "trsvcid": "4420", 00:23:40.178 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.178 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:40.178 "prchk_reftag": false, 00:23:40.178 "prchk_guard": false, 00:23:40.178 "hdgst": false, 00:23:40.178 "ddgst": false, 00:23:40.178 "method": "bdev_nvme_attach_controller", 00:23:40.178 "req_id": 1 00:23:40.178 } 00:23:40.178 Got JSON-RPC error response 00:23:40.178 response: 00:23:40.178 { 00:23:40.178 "code": -5, 00:23:40.178 "message": "Input/output error" 00:23:40.178 } 00:23:40.178 05:12:46 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 737456 00:23:40.178 05:12:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 737456 ']' 00:23:40.178 05:12:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 737456 00:23:40.178 05:12:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:40.178 05:12:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:40.178 05:12:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 737456 00:23:40.178 05:12:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:40.178 05:12:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:40.178 05:12:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 737456' 00:23:40.178 killing process with pid 737456 00:23:40.178 05:12:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 737456 00:23:40.178 Received shutdown signal, test time was about 10.000000 seconds 00:23:40.178 00:23:40.178 Latency(us) 00:23:40.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.178 =================================================================================================================== 00:23:40.178 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:40.178 05:12:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 737456 00:23:41.117 05:12:47 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:41.117 05:12:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:41.117 05:12:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:41.117 05:12:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:41.117 05:12:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:41.117 05:12:47 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 733138 00:23:41.117 05:12:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 733138 ']' 00:23:41.117 05:12:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 733138 00:23:41.117 05:12:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:41.117 05:12:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:41.117 05:12:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 733138 00:23:41.117 05:12:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:41.117 05:12:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:41.117 05:12:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 733138' 00:23:41.117 killing 
process with pid 733138 00:23:41.117 05:12:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 733138 00:23:41.117 [2024-07-13 05:12:47.582003] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:41.117 05:12:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 733138 00:23:42.505 05:12:48 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:42.505 05:12:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:42.505 05:12:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:42.505 05:12:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:42.505 05:12:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:42.505 05:12:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:23:42.505 05:12:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:42.764 05:12:49 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:42.764 05:12:49 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:23:42.764 05:12:49 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.DRLi7OOOm4 00:23:42.764 05:12:49 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:42.764 05:12:49 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.DRLi7OOOm4 00:23:42.764 05:12:49 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:23:42.764 05:12:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:42.764 05:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:42.764 05:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.764 05:12:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=737881 00:23:42.764 05:12:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:42.764 05:12:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 737881 00:23:42.764 05:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 737881 ']' 00:23:42.764 05:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.764 05:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:42.764 05:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.764 05:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:42.764 05:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.764 [2024-07-13 05:12:49.131989] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:42.764 [2024-07-13 05:12:49.132124] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.764 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.023 [2024-07-13 05:12:49.270763] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.284 [2024-07-13 05:12:49.527992] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:43.284 [2024-07-13 05:12:49.528076] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:43.284 [2024-07-13 05:12:49.528108] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:43.284 [2024-07-13 05:12:49.528133] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:43.284 [2024-07-13 05:12:49.528161] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:43.284 [2024-07-13 05:12:49.528215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.852 05:12:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:43.852 05:12:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:43.852 05:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:43.852 05:12:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:43.852 05:12:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:43.852 05:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.852 05:12:50 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.DRLi7OOOm4 00:23:43.852 05:12:50 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.DRLi7OOOm4 00:23:43.852 05:12:50 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:44.111 [2024-07-13 05:12:50.367178] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.111 05:12:50 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:44.368 05:12:50 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:44.626 [2024-07-13 05:12:50.956800] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:44.626 [2024-07-13 05:12:50.957148] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.626 05:12:50 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:44.884 malloc0 00:23:44.884 05:12:51 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:45.142 05:12:51 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.DRLi7OOOm4 00:23:45.400 [2024-07-13 05:12:51.780072] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:45.400 05:12:51 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DRLi7OOOm4 00:23:45.400 05:12:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:45.400 05:12:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:45.400 05:12:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:45.400 05:12:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.DRLi7OOOm4' 00:23:45.400 05:12:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:45.400 05:12:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=738289 00:23:45.400 05:12:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:45.400 05:12:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:45.400 05:12:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 738289 /var/tmp/bdevperf.sock 00:23:45.400 05:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 738289 ']' 00:23:45.400 05:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:45.400 05:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:45.400 05:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:45.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:45.400 05:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:45.400 05:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:45.400 [2024-07-13 05:12:51.870558] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
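
Target-side, setup_nvmf_tgt (tls.sh@49-58) boils down to the six RPCs traced above. Condensed into a standalone sketch, with rpc.py shortened to $RPC and the NQNs, address, and PSK path taken from this run:

    RPC="scripts/rpc.py"   # the log uses the absolute workspace path

    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k    # -k: TLS listener ("considered experimental")
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.DRLi7OOOm4        # per-host PSK file; deprecation warning expected

The initiator side (bdevperf) then attaches with bdev_nvme_attach_controller --psk pointing at the same file, which is what produces the TLSTESTn1 bdev used for the verify workload below.
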
00:23:45.400 [2024-07-13 05:12:51.870711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid738289 ] 00:23:45.658 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.658 [2024-07-13 05:12:51.995100] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.916 [2024-07-13 05:12:52.227052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:46.481 05:12:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:46.481 05:12:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:46.481 05:12:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DRLi7OOOm4 00:23:46.739 [2024-07-13 05:12:53.141322] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:46.739 [2024-07-13 05:12:53.141532] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:46.739 TLSTESTn1 00:23:46.997 05:12:53 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:46.997 Running I/O for 10 seconds... 00:23:56.961 00:23:56.961 Latency(us) 00:23:56.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.961 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:56.961 Verification LBA range: start 0x0 length 0x2000 00:23:56.961 TLSTESTn1 : 10.03 2420.46 9.45 0.00 0.00 52773.17 12621.75 83886.08 00:23:56.961 =================================================================================================================== 00:23:56.961 Total : 2420.46 9.45 0.00 0.00 52773.17 12621.75 83886.08 00:23:56.961 0 00:23:56.961 05:13:03 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:56.961 05:13:03 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 738289 00:23:56.961 05:13:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 738289 ']' 00:23:56.961 05:13:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 738289 00:23:56.961 05:13:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:56.961 05:13:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:56.961 05:13:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 738289 00:23:57.218 05:13:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:57.218 05:13:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:57.218 05:13:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 738289' 00:23:57.218 killing process with pid 738289 00:23:57.218 05:13:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 738289 00:23:57.218 Received shutdown signal, test time was about 10.000000 seconds 00:23:57.218 00:23:57.218 Latency(us) 00:23:57.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:23:57.218 =================================================================================================================== 00:23:57.218 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:57.218 [2024-07-13 05:13:03.468707] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:57.218 05:13:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 738289 00:23:58.153 05:13:04 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.DRLi7OOOm4 00:23:58.153 05:13:04 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DRLi7OOOm4 00:23:58.153 05:13:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:58.153 05:13:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DRLi7OOOm4 00:23:58.153 05:13:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:58.153 05:13:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:58.153 05:13:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:58.153 05:13:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:58.153 05:13:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DRLi7OOOm4 00:23:58.153 05:13:04 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:58.153 05:13:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:58.153 05:13:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:58.153 05:13:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.DRLi7OOOm4' 00:23:58.153 05:13:04 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:58.153 05:13:04 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=739737 00:23:58.153 05:13:04 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:58.153 05:13:04 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:58.153 05:13:04 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 739737 /var/tmp/bdevperf.sock 00:23:58.153 05:13:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 739737 ']' 00:23:58.153 05:13:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:58.153 05:13:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:58.153 05:13:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:58.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:58.153 05:13:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:58.153 05:13:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.153 [2024-07-13 05:13:04.527504] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
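
The ~2420 IOPS run above is the positive case; tls.sh@170-171 now flips to a negative one: the key file is loosened to 0666 and run_bdevperf is wrapped in NOT, because SPDK refuses PSK files readable by other users. A sketch of the expectation, assuming the NOT helper simply inverts the exit status (the real autotest_common.sh version also tracks the error code):

    chmod 0666 /tmp/tmp.DRLi7OOOm4
    if $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.DRLi7OOOm4; then
        echo "FAIL: world-readable PSK was accepted" >&2
        exit 1
    fi
    # expected: "Incorrect permissions for PSK file" and JSON-RPC code -1
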
00:23:58.153 [2024-07-13 05:13:04.527653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid739737 ] 00:23:58.153 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.153 [2024-07-13 05:13:04.649524] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.411 [2024-07-13 05:13:04.872566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.347 05:13:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:59.347 05:13:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:59.347 05:13:05 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DRLi7OOOm4 00:23:59.347 [2024-07-13 05:13:05.729854] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:59.347 [2024-07-13 05:13:05.729966] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:59.347 [2024-07-13 05:13:05.729996] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.DRLi7OOOm4 00:23:59.347 request: 00:23:59.347 { 00:23:59.347 "name": "TLSTEST", 00:23:59.347 "trtype": "tcp", 00:23:59.347 "traddr": "10.0.0.2", 00:23:59.347 "adrfam": "ipv4", 00:23:59.347 "trsvcid": "4420", 00:23:59.347 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.347 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:59.347 "prchk_reftag": false, 00:23:59.347 "prchk_guard": false, 00:23:59.347 "hdgst": false, 00:23:59.347 "ddgst": false, 00:23:59.347 "psk": "/tmp/tmp.DRLi7OOOm4", 00:23:59.347 "method": "bdev_nvme_attach_controller", 00:23:59.347 "req_id": 1 00:23:59.347 } 00:23:59.347 Got JSON-RPC error response 00:23:59.347 response: 00:23:59.347 { 00:23:59.347 "code": -1, 00:23:59.347 "message": "Operation not permitted" 00:23:59.347 } 00:23:59.347 05:13:05 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 739737 00:23:59.347 05:13:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 739737 ']' 00:23:59.347 05:13:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 739737 00:23:59.347 05:13:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:59.347 05:13:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:59.347 05:13:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 739737 00:23:59.347 05:13:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:59.347 05:13:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:59.347 05:13:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 739737' 00:23:59.347 killing process with pid 739737 00:23:59.347 05:13:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 739737 00:23:59.347 Received shutdown signal, test time was about 10.000000 seconds 00:23:59.347 00:23:59.347 Latency(us) 00:23:59.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.347 =================================================================================================================== 
00:23:59.347 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:59.347 05:13:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 739737 00:24:00.281 05:13:06 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:00.281 05:13:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:00.281 05:13:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:00.281 05:13:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:00.281 05:13:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:00.281 05:13:06 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 737881 00:24:00.281 05:13:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 737881 ']' 00:24:00.281 05:13:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 737881 00:24:00.281 05:13:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:00.281 05:13:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:00.281 05:13:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 737881 00:24:00.281 05:13:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:00.281 05:13:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:00.281 05:13:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 737881' 00:24:00.281 killing process with pid 737881 00:24:00.281 05:13:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 737881 00:24:00.281 [2024-07-13 05:13:06.739364] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:00.281 05:13:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 737881 00:24:02.187 05:13:08 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:24:02.187 05:13:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:02.187 05:13:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:02.187 05:13:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.187 05:13:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=740155 00:24:02.187 05:13:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:02.187 05:13:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 740155 00:24:02.187 05:13:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 740155 ']' 00:24:02.187 05:13:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.187 05:13:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:02.187 05:13:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.187 05:13:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:02.187 05:13:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.187 [2024-07-13 05:13:08.303578] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
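
With the attach correctly refused (code -1, "Operation not permitted") and the old target killed, nvmfappstart brings up a fresh nvmf_tgt for tls.sh@177, which exercises the same permission check on the nvmf_subsystem_add_host side. In this phy job the target runs inside the cvl_0_0_ns_spdk network namespace; the launch traced above reduces to:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # -i 0      shared-memory instance id (matches "spdk_trace -s nvmf -i 0" above)
    # -e 0xFFFF tracepoint group mask, which triggers the app_setup_trace notices
    # -m 0x2    core mask: a single reactor pinned to core 1
    waitforlisten "$nvmfpid"   # polls until /var/tmp/spdk.sock accepts RPCs
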
00:24:02.187 [2024-07-13 05:13:08.303726] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.187 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.187 [2024-07-13 05:13:08.443584] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.445 [2024-07-13 05:13:08.699335] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:02.445 [2024-07-13 05:13:08.699425] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:02.445 [2024-07-13 05:13:08.699454] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:02.445 [2024-07-13 05:13:08.699480] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:02.445 [2024-07-13 05:13:08.699501] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:02.445 [2024-07-13 05:13:08.699555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.012 05:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:03.012 05:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:03.012 05:13:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:03.012 05:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:03.012 05:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.012 05:13:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.012 05:13:09 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.DRLi7OOOm4 00:24:03.012 05:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:03.012 05:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.DRLi7OOOm4 00:24:03.012 05:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:24:03.012 05:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:03.012 05:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:24:03.012 05:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:03.012 05:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.DRLi7OOOm4 00:24:03.012 05:13:09 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.DRLi7OOOm4 00:24:03.012 05:13:09 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:03.270 [2024-07-13 05:13:09.542223] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:03.270 05:13:09 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:03.528 05:13:09 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:03.787 [2024-07-13 05:13:10.047708] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is 
considered experimental 00:24:03.787 [2024-07-13 05:13:10.048056] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:03.787 05:13:10 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:04.045 malloc0 00:24:04.045 05:13:10 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:04.304 05:13:10 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DRLi7OOOm4 00:24:04.563 [2024-07-13 05:13:10.962076] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:04.563 [2024-07-13 05:13:10.962138] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:24:04.563 [2024-07-13 05:13:10.962183] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:04.563 request: 00:24:04.563 { 00:24:04.563 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.563 "host": "nqn.2016-06.io.spdk:host1", 00:24:04.563 "psk": "/tmp/tmp.DRLi7OOOm4", 00:24:04.563 "method": "nvmf_subsystem_add_host", 00:24:04.563 "req_id": 1 00:24:04.563 } 00:24:04.563 Got JSON-RPC error response 00:24:04.563 response: 00:24:04.563 { 00:24:04.563 "code": -32603, 00:24:04.563 "message": "Internal error" 00:24:04.563 } 00:24:04.563 05:13:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:04.563 05:13:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:04.563 05:13:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:04.563 05:13:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:04.563 05:13:10 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 740155 00:24:04.563 05:13:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 740155 ']' 00:24:04.563 05:13:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 740155 00:24:04.563 05:13:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:04.563 05:13:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:04.563 05:13:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 740155 00:24:04.563 05:13:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:04.563 05:13:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:04.563 05:13:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 740155' 00:24:04.563 killing process with pid 740155 00:24:04.563 05:13:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 740155 00:24:04.563 05:13:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 740155 00:24:05.961 05:13:12 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.DRLi7OOOm4 00:24:05.961 05:13:12 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:24:05.961 05:13:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:05.961 05:13:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:05.961 05:13:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:05.961 05:13:12 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=740708 00:24:05.961 05:13:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:05.961 05:13:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 740708 00:24:05.961 05:13:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 740708 ']' 00:24:05.961 05:13:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.961 05:13:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:05.961 05:13:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.961 05:13:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:05.961 05:13:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.219 [2024-07-13 05:13:12.538781] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:06.219 [2024-07-13 05:13:12.538954] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:06.219 EAL: No free 2048 kB hugepages reported on node 1 00:24:06.219 [2024-07-13 05:13:12.679463] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.476 [2024-07-13 05:13:12.935009] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:06.476 [2024-07-13 05:13:12.935077] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:06.476 [2024-07-13 05:13:12.935108] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:06.476 [2024-07-13 05:13:12.935134] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:06.476 [2024-07-13 05:13:12.935157] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:06.476 [2024-07-13 05:13:12.935202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.042 05:13:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:07.042 05:13:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:07.042 05:13:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:07.042 05:13:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:07.042 05:13:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.042 05:13:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.042 05:13:13 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.DRLi7OOOm4 00:24:07.042 05:13:13 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.DRLi7OOOm4 00:24:07.043 05:13:13 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:07.300 [2024-07-13 05:13:13.674086] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.300 05:13:13 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:07.562 05:13:13 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:07.821 [2024-07-13 05:13:14.163433] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:07.821 [2024-07-13 05:13:14.163739] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:07.821 05:13:14 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:08.079 malloc0 00:24:08.079 05:13:14 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:08.335 05:13:14 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DRLi7OOOm4 00:24:08.593 [2024-07-13 05:13:14.929814] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:08.593 05:13:14 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=741004 00:24:08.593 05:13:14 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:08.593 05:13:14 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:08.593 05:13:14 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 741004 /var/tmp/bdevperf.sock 00:24:08.593 05:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 741004 ']' 00:24:08.593 05:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:08.593 05:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:08.593 05:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:08.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:08.593 05:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:08.593 05:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.593 [2024-07-13 05:13:15.026183] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:08.593 [2024-07-13 05:13:15.026328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid741004 ] 00:24:08.851 EAL: No free 2048 kB hugepages reported on node 1 00:24:08.851 [2024-07-13 05:13:15.151978] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.109 [2024-07-13 05:13:15.387790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.675 05:13:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:09.675 05:13:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:09.675 05:13:15 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DRLi7OOOm4 00:24:09.934 [2024-07-13 05:13:16.268463] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:09.934 [2024-07-13 05:13:16.268669] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:09.934 TLSTESTn1 00:24:09.934 05:13:16 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:10.499 05:13:16 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:24:10.500 "subsystems": [ 00:24:10.500 { 00:24:10.500 "subsystem": "keyring", 00:24:10.500 "config": [] 00:24:10.500 }, 00:24:10.500 { 00:24:10.500 "subsystem": "iobuf", 00:24:10.500 "config": [ 00:24:10.500 { 00:24:10.500 "method": "iobuf_set_options", 00:24:10.500 "params": { 00:24:10.500 "small_pool_count": 8192, 00:24:10.500 "large_pool_count": 1024, 00:24:10.500 "small_bufsize": 8192, 00:24:10.500 "large_bufsize": 135168 00:24:10.500 } 00:24:10.500 } 00:24:10.500 ] 00:24:10.500 }, 00:24:10.500 { 00:24:10.500 "subsystem": "sock", 00:24:10.500 "config": [ 00:24:10.500 { 00:24:10.500 "method": "sock_set_default_impl", 00:24:10.500 "params": { 00:24:10.500 "impl_name": "posix" 00:24:10.500 } 00:24:10.500 }, 00:24:10.500 { 00:24:10.500 "method": "sock_impl_set_options", 00:24:10.500 "params": { 00:24:10.500 "impl_name": "ssl", 00:24:10.500 "recv_buf_size": 4096, 00:24:10.500 "send_buf_size": 4096, 00:24:10.500 "enable_recv_pipe": true, 00:24:10.500 "enable_quickack": false, 00:24:10.500 "enable_placement_id": 0, 00:24:10.500 "enable_zerocopy_send_server": true, 00:24:10.500 "enable_zerocopy_send_client": false, 00:24:10.500 "zerocopy_threshold": 0, 00:24:10.500 "tls_version": 0, 00:24:10.500 "enable_ktls": false 00:24:10.500 } 00:24:10.500 }, 00:24:10.500 { 00:24:10.500 "method": "sock_impl_set_options", 00:24:10.500 "params": { 00:24:10.500 "impl_name": "posix", 00:24:10.500 "recv_buf_size": 2097152, 00:24:10.500 
"send_buf_size": 2097152, 00:24:10.500 "enable_recv_pipe": true, 00:24:10.500 "enable_quickack": false, 00:24:10.500 "enable_placement_id": 0, 00:24:10.500 "enable_zerocopy_send_server": true, 00:24:10.500 "enable_zerocopy_send_client": false, 00:24:10.500 "zerocopy_threshold": 0, 00:24:10.500 "tls_version": 0, 00:24:10.500 "enable_ktls": false 00:24:10.500 } 00:24:10.500 } 00:24:10.500 ] 00:24:10.500 }, 00:24:10.500 { 00:24:10.500 "subsystem": "vmd", 00:24:10.500 "config": [] 00:24:10.500 }, 00:24:10.500 { 00:24:10.500 "subsystem": "accel", 00:24:10.500 "config": [ 00:24:10.500 { 00:24:10.500 "method": "accel_set_options", 00:24:10.500 "params": { 00:24:10.500 "small_cache_size": 128, 00:24:10.500 "large_cache_size": 16, 00:24:10.500 "task_count": 2048, 00:24:10.500 "sequence_count": 2048, 00:24:10.500 "buf_count": 2048 00:24:10.500 } 00:24:10.500 } 00:24:10.500 ] 00:24:10.500 }, 00:24:10.500 { 00:24:10.500 "subsystem": "bdev", 00:24:10.500 "config": [ 00:24:10.500 { 00:24:10.500 "method": "bdev_set_options", 00:24:10.500 "params": { 00:24:10.500 "bdev_io_pool_size": 65535, 00:24:10.500 "bdev_io_cache_size": 256, 00:24:10.500 "bdev_auto_examine": true, 00:24:10.500 "iobuf_small_cache_size": 128, 00:24:10.500 "iobuf_large_cache_size": 16 00:24:10.500 } 00:24:10.500 }, 00:24:10.500 { 00:24:10.500 "method": "bdev_raid_set_options", 00:24:10.500 "params": { 00:24:10.500 "process_window_size_kb": 1024 00:24:10.500 } 00:24:10.500 }, 00:24:10.500 { 00:24:10.500 "method": "bdev_iscsi_set_options", 00:24:10.500 "params": { 00:24:10.500 "timeout_sec": 30 00:24:10.500 } 00:24:10.500 }, 00:24:10.500 { 00:24:10.500 "method": "bdev_nvme_set_options", 00:24:10.500 "params": { 00:24:10.500 "action_on_timeout": "none", 00:24:10.500 "timeout_us": 0, 00:24:10.500 "timeout_admin_us": 0, 00:24:10.500 "keep_alive_timeout_ms": 10000, 00:24:10.500 "arbitration_burst": 0, 00:24:10.500 "low_priority_weight": 0, 00:24:10.500 "medium_priority_weight": 0, 00:24:10.500 "high_priority_weight": 0, 00:24:10.500 "nvme_adminq_poll_period_us": 10000, 00:24:10.500 "nvme_ioq_poll_period_us": 0, 00:24:10.500 "io_queue_requests": 0, 00:24:10.500 "delay_cmd_submit": true, 00:24:10.500 "transport_retry_count": 4, 00:24:10.500 "bdev_retry_count": 3, 00:24:10.500 "transport_ack_timeout": 0, 00:24:10.500 "ctrlr_loss_timeout_sec": 0, 00:24:10.500 "reconnect_delay_sec": 0, 00:24:10.500 "fast_io_fail_timeout_sec": 0, 00:24:10.500 "disable_auto_failback": false, 00:24:10.500 "generate_uuids": false, 00:24:10.500 "transport_tos": 0, 00:24:10.500 "nvme_error_stat": false, 00:24:10.500 "rdma_srq_size": 0, 00:24:10.500 "io_path_stat": false, 00:24:10.500 "allow_accel_sequence": false, 00:24:10.500 "rdma_max_cq_size": 0, 00:24:10.500 "rdma_cm_event_timeout_ms": 0, 00:24:10.500 "dhchap_digests": [ 00:24:10.500 "sha256", 00:24:10.500 "sha384", 00:24:10.500 "sha512" 00:24:10.500 ], 00:24:10.500 "dhchap_dhgroups": [ 00:24:10.500 "null", 00:24:10.500 "ffdhe2048", 00:24:10.500 "ffdhe3072", 00:24:10.500 "ffdhe4096", 00:24:10.500 "ffdhe6144", 00:24:10.500 "ffdhe8192" 00:24:10.500 ] 00:24:10.500 } 00:24:10.500 }, 00:24:10.500 { 00:24:10.500 "method": "bdev_nvme_set_hotplug", 00:24:10.500 "params": { 00:24:10.500 "period_us": 100000, 00:24:10.500 "enable": false 00:24:10.500 } 00:24:10.500 }, 00:24:10.500 { 00:24:10.500 "method": "bdev_malloc_create", 00:24:10.500 "params": { 00:24:10.500 "name": "malloc0", 00:24:10.500 "num_blocks": 8192, 00:24:10.500 "block_size": 4096, 00:24:10.500 "physical_block_size": 4096, 00:24:10.500 "uuid": 
"32d1ec0f-68b3-4de2-823b-7fdefb2c3778", 00:24:10.500 "optimal_io_boundary": 0 00:24:10.500 } 00:24:10.500 }, 00:24:10.500 { 00:24:10.500 "method": "bdev_wait_for_examine" 00:24:10.500 } 00:24:10.500 ] 00:24:10.500 }, 00:24:10.500 { 00:24:10.500 "subsystem": "nbd", 00:24:10.500 "config": [] 00:24:10.500 }, 00:24:10.500 { 00:24:10.500 "subsystem": "scheduler", 00:24:10.500 "config": [ 00:24:10.500 { 00:24:10.500 "method": "framework_set_scheduler", 00:24:10.500 "params": { 00:24:10.500 "name": "static" 00:24:10.500 } 00:24:10.500 } 00:24:10.500 ] 00:24:10.500 }, 00:24:10.500 { 00:24:10.500 "subsystem": "nvmf", 00:24:10.500 "config": [ 00:24:10.500 { 00:24:10.500 "method": "nvmf_set_config", 00:24:10.500 "params": { 00:24:10.500 "discovery_filter": "match_any", 00:24:10.500 "admin_cmd_passthru": { 00:24:10.500 "identify_ctrlr": false 00:24:10.500 } 00:24:10.500 } 00:24:10.500 }, 00:24:10.500 { 00:24:10.500 "method": "nvmf_set_max_subsystems", 00:24:10.500 "params": { 00:24:10.500 "max_subsystems": 1024 00:24:10.500 } 00:24:10.500 }, 00:24:10.500 { 00:24:10.500 "method": "nvmf_set_crdt", 00:24:10.500 "params": { 00:24:10.500 "crdt1": 0, 00:24:10.500 "crdt2": 0, 00:24:10.500 "crdt3": 0 00:24:10.500 } 00:24:10.500 }, 00:24:10.500 { 00:24:10.500 "method": "nvmf_create_transport", 00:24:10.500 "params": { 00:24:10.500 "trtype": "TCP", 00:24:10.500 "max_queue_depth": 128, 00:24:10.500 "max_io_qpairs_per_ctrlr": 127, 00:24:10.500 "in_capsule_data_size": 4096, 00:24:10.500 "max_io_size": 131072, 00:24:10.500 "io_unit_size": 131072, 00:24:10.500 "max_aq_depth": 128, 00:24:10.500 "num_shared_buffers": 511, 00:24:10.500 "buf_cache_size": 4294967295, 00:24:10.500 "dif_insert_or_strip": false, 00:24:10.500 "zcopy": false, 00:24:10.500 "c2h_success": false, 00:24:10.500 "sock_priority": 0, 00:24:10.500 "abort_timeout_sec": 1, 00:24:10.500 "ack_timeout": 0, 00:24:10.500 "data_wr_pool_size": 0 00:24:10.500 } 00:24:10.500 }, 00:24:10.500 { 00:24:10.500 "method": "nvmf_create_subsystem", 00:24:10.500 "params": { 00:24:10.500 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.500 "allow_any_host": false, 00:24:10.500 "serial_number": "SPDK00000000000001", 00:24:10.500 "model_number": "SPDK bdev Controller", 00:24:10.500 "max_namespaces": 10, 00:24:10.500 "min_cntlid": 1, 00:24:10.500 "max_cntlid": 65519, 00:24:10.500 "ana_reporting": false 00:24:10.500 } 00:24:10.500 }, 00:24:10.500 { 00:24:10.500 "method": "nvmf_subsystem_add_host", 00:24:10.500 "params": { 00:24:10.500 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.501 "host": "nqn.2016-06.io.spdk:host1", 00:24:10.501 "psk": "/tmp/tmp.DRLi7OOOm4" 00:24:10.501 } 00:24:10.501 }, 00:24:10.501 { 00:24:10.501 "method": "nvmf_subsystem_add_ns", 00:24:10.501 "params": { 00:24:10.501 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.501 "namespace": { 00:24:10.501 "nsid": 1, 00:24:10.501 "bdev_name": "malloc0", 00:24:10.501 "nguid": "32D1EC0F68B34DE2823B7FDEFB2C3778", 00:24:10.501 "uuid": "32d1ec0f-68b3-4de2-823b-7fdefb2c3778", 00:24:10.501 "no_auto_visible": false 00:24:10.501 } 00:24:10.501 } 00:24:10.501 }, 00:24:10.501 { 00:24:10.501 "method": "nvmf_subsystem_add_listener", 00:24:10.501 "params": { 00:24:10.501 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.501 "listen_address": { 00:24:10.501 "trtype": "TCP", 00:24:10.501 "adrfam": "IPv4", 00:24:10.501 "traddr": "10.0.0.2", 00:24:10.501 "trsvcid": "4420" 00:24:10.501 }, 00:24:10.501 "secure_channel": true 00:24:10.501 } 00:24:10.501 } 00:24:10.501 ] 00:24:10.501 } 00:24:10.501 ] 00:24:10.501 }' 00:24:10.501 05:13:16 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:10.759 05:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:24:10.759 "subsystems": [ 00:24:10.759 { 00:24:10.759 "subsystem": "keyring", 00:24:10.759 "config": [] 00:24:10.759 }, 00:24:10.759 { 00:24:10.759 "subsystem": "iobuf", 00:24:10.759 "config": [ 00:24:10.759 { 00:24:10.759 "method": "iobuf_set_options", 00:24:10.759 "params": { 00:24:10.759 "small_pool_count": 8192, 00:24:10.759 "large_pool_count": 1024, 00:24:10.759 "small_bufsize": 8192, 00:24:10.759 "large_bufsize": 135168 00:24:10.759 } 00:24:10.759 } 00:24:10.759 ] 00:24:10.759 }, 00:24:10.759 { 00:24:10.759 "subsystem": "sock", 00:24:10.759 "config": [ 00:24:10.759 { 00:24:10.759 "method": "sock_set_default_impl", 00:24:10.759 "params": { 00:24:10.759 "impl_name": "posix" 00:24:10.759 } 00:24:10.759 }, 00:24:10.759 { 00:24:10.759 "method": "sock_impl_set_options", 00:24:10.759 "params": { 00:24:10.759 "impl_name": "ssl", 00:24:10.759 "recv_buf_size": 4096, 00:24:10.759 "send_buf_size": 4096, 00:24:10.759 "enable_recv_pipe": true, 00:24:10.759 "enable_quickack": false, 00:24:10.759 "enable_placement_id": 0, 00:24:10.759 "enable_zerocopy_send_server": true, 00:24:10.759 "enable_zerocopy_send_client": false, 00:24:10.759 "zerocopy_threshold": 0, 00:24:10.759 "tls_version": 0, 00:24:10.759 "enable_ktls": false 00:24:10.759 } 00:24:10.759 }, 00:24:10.759 { 00:24:10.759 "method": "sock_impl_set_options", 00:24:10.759 "params": { 00:24:10.759 "impl_name": "posix", 00:24:10.759 "recv_buf_size": 2097152, 00:24:10.759 "send_buf_size": 2097152, 00:24:10.759 "enable_recv_pipe": true, 00:24:10.759 "enable_quickack": false, 00:24:10.759 "enable_placement_id": 0, 00:24:10.759 "enable_zerocopy_send_server": true, 00:24:10.759 "enable_zerocopy_send_client": false, 00:24:10.759 "zerocopy_threshold": 0, 00:24:10.759 "tls_version": 0, 00:24:10.759 "enable_ktls": false 00:24:10.759 } 00:24:10.759 } 00:24:10.759 ] 00:24:10.759 }, 00:24:10.759 { 00:24:10.759 "subsystem": "vmd", 00:24:10.759 "config": [] 00:24:10.759 }, 00:24:10.759 { 00:24:10.759 "subsystem": "accel", 00:24:10.759 "config": [ 00:24:10.759 { 00:24:10.759 "method": "accel_set_options", 00:24:10.759 "params": { 00:24:10.759 "small_cache_size": 128, 00:24:10.759 "large_cache_size": 16, 00:24:10.759 "task_count": 2048, 00:24:10.759 "sequence_count": 2048, 00:24:10.759 "buf_count": 2048 00:24:10.759 } 00:24:10.759 } 00:24:10.759 ] 00:24:10.759 }, 00:24:10.759 { 00:24:10.759 "subsystem": "bdev", 00:24:10.759 "config": [ 00:24:10.759 { 00:24:10.759 "method": "bdev_set_options", 00:24:10.759 "params": { 00:24:10.759 "bdev_io_pool_size": 65535, 00:24:10.759 "bdev_io_cache_size": 256, 00:24:10.759 "bdev_auto_examine": true, 00:24:10.759 "iobuf_small_cache_size": 128, 00:24:10.759 "iobuf_large_cache_size": 16 00:24:10.759 } 00:24:10.759 }, 00:24:10.759 { 00:24:10.759 "method": "bdev_raid_set_options", 00:24:10.759 "params": { 00:24:10.759 "process_window_size_kb": 1024 00:24:10.759 } 00:24:10.759 }, 00:24:10.759 { 00:24:10.759 "method": "bdev_iscsi_set_options", 00:24:10.759 "params": { 00:24:10.759 "timeout_sec": 30 00:24:10.759 } 00:24:10.759 }, 00:24:10.759 { 00:24:10.759 "method": "bdev_nvme_set_options", 00:24:10.759 "params": { 00:24:10.759 "action_on_timeout": "none", 00:24:10.759 "timeout_us": 0, 00:24:10.759 "timeout_admin_us": 0, 00:24:10.759 "keep_alive_timeout_ms": 10000, 00:24:10.759 "arbitration_burst": 0, 
00:24:10.759 "low_priority_weight": 0, 00:24:10.759 "medium_priority_weight": 0, 00:24:10.759 "high_priority_weight": 0, 00:24:10.759 "nvme_adminq_poll_period_us": 10000, 00:24:10.759 "nvme_ioq_poll_period_us": 0, 00:24:10.759 "io_queue_requests": 512, 00:24:10.759 "delay_cmd_submit": true, 00:24:10.759 "transport_retry_count": 4, 00:24:10.759 "bdev_retry_count": 3, 00:24:10.759 "transport_ack_timeout": 0, 00:24:10.759 "ctrlr_loss_timeout_sec": 0, 00:24:10.759 "reconnect_delay_sec": 0, 00:24:10.759 "fast_io_fail_timeout_sec": 0, 00:24:10.759 "disable_auto_failback": false, 00:24:10.759 "generate_uuids": false, 00:24:10.760 "transport_tos": 0, 00:24:10.760 "nvme_error_stat": false, 00:24:10.760 "rdma_srq_size": 0, 00:24:10.760 "io_path_stat": false, 00:24:10.760 "allow_accel_sequence": false, 00:24:10.760 "rdma_max_cq_size": 0, 00:24:10.760 "rdma_cm_event_timeout_ms": 0, 00:24:10.760 "dhchap_digests": [ 00:24:10.760 "sha256", 00:24:10.760 "sha384", 00:24:10.760 "sha512" 00:24:10.760 ], 00:24:10.760 "dhchap_dhgroups": [ 00:24:10.760 "null", 00:24:10.760 "ffdhe2048", 00:24:10.760 "ffdhe3072", 00:24:10.760 "ffdhe4096", 00:24:10.760 "ffdhe6144", 00:24:10.760 "ffdhe8192" 00:24:10.760 ] 00:24:10.760 } 00:24:10.760 }, 00:24:10.760 { 00:24:10.760 "method": "bdev_nvme_attach_controller", 00:24:10.760 "params": { 00:24:10.760 "name": "TLSTEST", 00:24:10.760 "trtype": "TCP", 00:24:10.760 "adrfam": "IPv4", 00:24:10.760 "traddr": "10.0.0.2", 00:24:10.760 "trsvcid": "4420", 00:24:10.760 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.760 "prchk_reftag": false, 00:24:10.760 "prchk_guard": false, 00:24:10.760 "ctrlr_loss_timeout_sec": 0, 00:24:10.760 "reconnect_delay_sec": 0, 00:24:10.760 "fast_io_fail_timeout_sec": 0, 00:24:10.760 "psk": "/tmp/tmp.DRLi7OOOm4", 00:24:10.760 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:10.760 "hdgst": false, 00:24:10.760 "ddgst": false 00:24:10.760 } 00:24:10.760 }, 00:24:10.760 { 00:24:10.760 "method": "bdev_nvme_set_hotplug", 00:24:10.760 "params": { 00:24:10.760 "period_us": 100000, 00:24:10.760 "enable": false 00:24:10.760 } 00:24:10.760 }, 00:24:10.760 { 00:24:10.760 "method": "bdev_wait_for_examine" 00:24:10.760 } 00:24:10.760 ] 00:24:10.760 }, 00:24:10.760 { 00:24:10.760 "subsystem": "nbd", 00:24:10.760 "config": [] 00:24:10.760 } 00:24:10.760 ] 00:24:10.760 }' 00:24:10.760 05:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 741004 00:24:10.760 05:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 741004 ']' 00:24:10.760 05:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 741004 00:24:10.760 05:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:10.760 05:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:10.760 05:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 741004 00:24:10.760 05:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:10.760 05:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:10.760 05:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 741004' 00:24:10.760 killing process with pid 741004 00:24:10.760 05:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 741004 00:24:10.760 Received shutdown signal, test time was about 10.000000 seconds 00:24:10.760 00:24:10.760 Latency(us) 00:24:10.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:10.760 =================================================================================================================== 00:24:10.760 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:10.760 [2024-07-13 05:13:17.048707] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:10.760 05:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 741004 00:24:11.694 05:13:18 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 740708 00:24:11.694 05:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 740708 ']' 00:24:11.694 05:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 740708 00:24:11.694 05:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:11.694 05:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:11.694 05:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 740708 00:24:11.694 05:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:11.694 05:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:11.694 05:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 740708' 00:24:11.694 killing process with pid 740708 00:24:11.694 05:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 740708 00:24:11.694 [2024-07-13 05:13:18.038092] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:11.694 05:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 740708 00:24:13.067 05:13:19 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:13.067 05:13:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:13.067 05:13:19 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:24:13.067 "subsystems": [ 00:24:13.067 { 00:24:13.067 "subsystem": "keyring", 00:24:13.067 "config": [] 00:24:13.067 }, 00:24:13.067 { 00:24:13.067 "subsystem": "iobuf", 00:24:13.067 "config": [ 00:24:13.067 { 00:24:13.067 "method": "iobuf_set_options", 00:24:13.067 "params": { 00:24:13.067 "small_pool_count": 8192, 00:24:13.067 "large_pool_count": 1024, 00:24:13.067 "small_bufsize": 8192, 00:24:13.067 "large_bufsize": 135168 00:24:13.067 } 00:24:13.067 } 00:24:13.067 ] 00:24:13.067 }, 00:24:13.067 { 00:24:13.067 "subsystem": "sock", 00:24:13.067 "config": [ 00:24:13.067 { 00:24:13.067 "method": "sock_set_default_impl", 00:24:13.067 "params": { 00:24:13.067 "impl_name": "posix" 00:24:13.067 } 00:24:13.067 }, 00:24:13.067 { 00:24:13.067 "method": "sock_impl_set_options", 00:24:13.067 "params": { 00:24:13.067 "impl_name": "ssl", 00:24:13.067 "recv_buf_size": 4096, 00:24:13.067 "send_buf_size": 4096, 00:24:13.067 "enable_recv_pipe": true, 00:24:13.067 "enable_quickack": false, 00:24:13.067 "enable_placement_id": 0, 00:24:13.067 "enable_zerocopy_send_server": true, 00:24:13.067 "enable_zerocopy_send_client": false, 00:24:13.067 "zerocopy_threshold": 0, 00:24:13.067 "tls_version": 0, 00:24:13.067 "enable_ktls": false 00:24:13.067 } 00:24:13.067 }, 00:24:13.067 { 00:24:13.067 "method": "sock_impl_set_options", 00:24:13.067 "params": { 00:24:13.067 "impl_name": "posix", 00:24:13.067 "recv_buf_size": 2097152, 00:24:13.067 "send_buf_size": 2097152, 00:24:13.067 "enable_recv_pipe": true, 00:24:13.067 
"enable_quickack": false, 00:24:13.067 "enable_placement_id": 0, 00:24:13.067 "enable_zerocopy_send_server": true, 00:24:13.067 "enable_zerocopy_send_client": false, 00:24:13.067 "zerocopy_threshold": 0, 00:24:13.067 "tls_version": 0, 00:24:13.067 "enable_ktls": false 00:24:13.067 } 00:24:13.067 } 00:24:13.067 ] 00:24:13.067 }, 00:24:13.067 { 00:24:13.067 "subsystem": "vmd", 00:24:13.067 "config": [] 00:24:13.067 }, 00:24:13.067 { 00:24:13.067 "subsystem": "accel", 00:24:13.067 "config": [ 00:24:13.067 { 00:24:13.067 "method": "accel_set_options", 00:24:13.067 "params": { 00:24:13.067 "small_cache_size": 128, 00:24:13.067 "large_cache_size": 16, 00:24:13.067 "task_count": 2048, 00:24:13.067 "sequence_count": 2048, 00:24:13.067 "buf_count": 2048 00:24:13.067 } 00:24:13.067 } 00:24:13.067 ] 00:24:13.067 }, 00:24:13.067 { 00:24:13.067 "subsystem": "bdev", 00:24:13.067 "config": [ 00:24:13.067 { 00:24:13.067 "method": "bdev_set_options", 00:24:13.067 "params": { 00:24:13.067 "bdev_io_pool_size": 65535, 00:24:13.067 "bdev_io_cache_size": 256, 00:24:13.067 "bdev_auto_examine": true, 00:24:13.067 "iobuf_small_cache_size": 128, 00:24:13.067 "iobuf_large_cache_size": 16 00:24:13.067 } 00:24:13.067 }, 00:24:13.067 { 00:24:13.067 "method": "bdev_raid_set_options", 00:24:13.067 "params": { 00:24:13.067 "process_window_size_kb": 1024 00:24:13.067 } 00:24:13.067 }, 00:24:13.067 { 00:24:13.067 "method": "bdev_iscsi_set_options", 00:24:13.067 "params": { 00:24:13.067 "timeout_sec": 30 00:24:13.067 } 00:24:13.067 }, 00:24:13.067 { 00:24:13.067 "method": "bdev_nvme_set_options", 00:24:13.067 "params": { 00:24:13.067 "action_on_timeout": "none", 00:24:13.067 "timeout_us": 0, 00:24:13.067 "timeout_admin_us": 0, 00:24:13.067 "keep_alive_timeout_ms": 10000, 00:24:13.067 "arbitration_burst": 0, 00:24:13.067 "low_priority_weight": 0, 00:24:13.067 "medium_priority_weight": 0, 00:24:13.067 "high_priority_weight": 0, 00:24:13.067 "nvme_adminq_poll_period_us": 10000, 00:24:13.067 "nvme_ioq_poll_period_us": 0, 00:24:13.067 "io_queue_requests": 0, 00:24:13.067 "delay_cmd_submit": true, 00:24:13.067 "transport_retry_count": 4, 00:24:13.067 "bdev_retry_count": 3, 00:24:13.067 "transport_ack_timeout": 0, 00:24:13.067 "ctrlr_loss_timeout_sec": 0, 00:24:13.067 "reconnect_delay_sec": 0, 00:24:13.067 "fast_io_fail_timeout_sec": 0, 00:24:13.067 "disable_auto_failback": false, 00:24:13.067 "generate_uuids": false, 00:24:13.067 "transport_tos": 0, 00:24:13.067 "nvme_error_stat": false, 00:24:13.067 "rdma_srq_size": 0, 00:24:13.067 "io_path_stat": false, 00:24:13.067 "allow_accel_sequence": false, 00:24:13.067 "rdma_max_cq_size": 0, 00:24:13.067 "rdma_cm_event_timeout_ms": 0, 00:24:13.067 "dhchap_digests": [ 00:24:13.067 "sha256", 00:24:13.067 "sha384", 00:24:13.067 "sha512" 00:24:13.067 ], 00:24:13.067 "dhchap_dhgroups": [ 00:24:13.067 "null", 00:24:13.067 "ffdhe2048", 00:24:13.067 "ffdhe3072", 00:24:13.067 "ffdhe4096", 00:24:13.067 "ffdhe6144", 00:24:13.067 "ffdhe8192" 00:24:13.067 ] 00:24:13.067 } 00:24:13.067 }, 00:24:13.067 { 00:24:13.067 "method": "bdev_nvme_set_hotplug", 00:24:13.067 "params": { 00:24:13.067 "period_us": 100000, 00:24:13.067 "enable": false 00:24:13.067 } 00:24:13.067 }, 00:24:13.067 { 00:24:13.068 "method": "bdev_malloc_create", 00:24:13.068 "params": { 00:24:13.068 "name": "malloc0", 00:24:13.068 "num_blocks": 8192, 00:24:13.068 "block_size": 4096, 00:24:13.068 "physical_block_size": 4096, 00:24:13.068 "uuid": "32d1ec0f-68b3-4de2-823b-7fdefb2c3778", 00:24:13.068 "optimal_io_boundary": 0 00:24:13.068 } 
00:24:13.068 }, 00:24:13.068 { 00:24:13.068 "method": "bdev_wait_for_examine" 00:24:13.068 } 00:24:13.068 ] 00:24:13.068 }, 00:24:13.068 { 00:24:13.068 "subsystem": "nbd", 00:24:13.068 "config": [] 00:24:13.068 }, 00:24:13.068 { 00:24:13.068 "subsystem": "scheduler", 00:24:13.068 "config": [ 00:24:13.068 { 00:24:13.068 "method": "framework_set_scheduler", 00:24:13.068 "params": { 00:24:13.068 "name": "static" 00:24:13.068 } 00:24:13.068 } 00:24:13.068 ] 00:24:13.068 }, 00:24:13.068 { 00:24:13.068 "subsystem": "nvmf", 00:24:13.068 "config": [ 00:24:13.068 { 00:24:13.068 "method": "nvmf_set_config", 00:24:13.068 "params": { 00:24:13.068 "discovery_filter": "match_any", 00:24:13.068 "admin_cmd_passthru": { 00:24:13.068 "identify_ctrlr": false 00:24:13.068 } 00:24:13.068 } 00:24:13.068 }, 00:24:13.068 { 00:24:13.068 "method": "nvmf_set_max_subsystems", 00:24:13.068 "params": { 00:24:13.068 "max_subsystems": 1024 00:24:13.068 } 00:24:13.068 }, 00:24:13.068 { 00:24:13.068 "method": "nvmf_set_crdt", 00:24:13.068 "params": { 00:24:13.068 "crdt1": 0, 00:24:13.068 "crdt2": 0, 00:24:13.068 "crdt3": 0 00:24:13.068 } 00:24:13.068 }, 00:24:13.068 { 00:24:13.068 "method": "nvmf_create_transport", 00:24:13.068 "params": { 00:24:13.068 "trtype": "TCP", 00:24:13.068 "max_queue_depth": 128, 00:24:13.068 "max_io_qpairs_per_ctrlr": 127, 00:24:13.068 "in_capsule_data_size": 4096, 00:24:13.068 "max_io_size": 131072, 00:24:13.068 "io_unit_size": 131072, 00:24:13.068 "max_aq_depth": 128, 00:24:13.068 "num_shared_buffers": 511, 00:24:13.068 "buf_cache_size": 4294967295, 00:24:13.068 "dif_insert_or_strip": false, 00:24:13.068 "zcopy": false, 00:24:13.068 "c2h_success": false, 00:24:13.068 "sock_priority": 0, 00:24:13.068 "abort_timeout_sec": 1, 00:24:13.068 "ack_timeout": 0, 00:24:13.068 "data_wr_pool_size": 0 00:24:13.068 } 00:24:13.068 }, 00:24:13.068 { 00:24:13.068 "method": "nvmf_create_subsystem", 00:24:13.068 "params": { 00:24:13.068 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.068 "allow_any_host": false, 00:24:13.068 "serial_number": "SPDK00000000000001", 00:24:13.068 "model_number": "SPDK bdev Controller", 00:24:13.068 "max_namespaces": 10, 00:24:13.068 "min_cntlid": 1, 00:24:13.068 "max_cntlid": 65519, 00:24:13.068 "ana_reporting": false 00:24:13.068 } 00:24:13.068 }, 00:24:13.068 { 00:24:13.068 "method": "nvmf_subsystem_add_host", 00:24:13.068 "params": { 00:24:13.068 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.068 "host": "nqn.2016-06.io.spdk:host1", 00:24:13.068 "psk": "/tmp/tmp.DRLi7OOOm4" 00:24:13.068 } 00:24:13.068 }, 00:24:13.068 { 00:24:13.068 "method": "nvmf_subsystem_add_ns", 00:24:13.068 "params": { 00:24:13.068 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.068 "namespace": { 00:24:13.068 "nsid": 1, 00:24:13.068 "bdev_name": "malloc0", 00:24:13.068 "nguid": "32D1EC0F68B34DE2823B7FDEFB2C3778", 00:24:13.068 "uuid": "32d1ec0f-68b3-4de2-823b-7fdefb2c3778", 00:24:13.068 "no_auto_visible": false 00:24:13.068 } 00:24:13.068 } 00:24:13.068 }, 00:24:13.068 { 00:24:13.068 "method": "nvmf_subsystem_add_listener", 00:24:13.068 "params": { 00:24:13.068 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.068 "listen_address": { 00:24:13.068 "trtype": "TCP", 00:24:13.068 "adrfam": "IPv4", 00:24:13.068 "traddr": "10.0.0.2", 00:24:13.068 "trsvcid": "4420" 00:24:13.068 }, 00:24:13.068 "secure_channel": true 00:24:13.068 } 00:24:13.068 } 00:24:13.068 ] 00:24:13.068 } 00:24:13.068 ] 00:24:13.068 }' 00:24:13.068 05:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:13.068 05:13:19 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:13.068 05:13:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=741543 00:24:13.068 05:13:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:13.068 05:13:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 741543 00:24:13.068 05:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 741543 ']' 00:24:13.068 05:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.068 05:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:13.068 05:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.068 05:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:13.068 05:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:13.068 [2024-07-13 05:13:19.547115] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:13.068 [2024-07-13 05:13:19.547250] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:13.326 EAL: No free 2048 kB hugepages reported on node 1 00:24:13.326 [2024-07-13 05:13:19.684020] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.583 [2024-07-13 05:13:19.940610] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:13.583 [2024-07-13 05:13:19.940674] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:13.583 [2024-07-13 05:13:19.940706] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.583 [2024-07-13 05:13:19.940732] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.583 [2024-07-13 05:13:19.940755] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
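The target above is launched with its whole JSON configuration delivered over /dev/fd/62 rather than from a file on disk: the echo'd '{ "subsystems": ... }' block is what arrives on that descriptor. A minimal sketch of the pattern, with a deliberately trimmed-down config standing in for the full dump above:

  # feed a JSON config to nvmf_tgt through process substitution; the shell
  # exposes <(...) as /dev/fd/NN, which is the /dev/fd/62 seen in the log
  NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
  tgt_config() {
    echo '{ "subsystems": [ { "subsystem": "scheduler", "config": [
            { "method": "framework_set_scheduler", "params": { "name": "static" } } ] } ] }'
  }
  $NVMF_TGT -i 0 -e 0xFFFF -m 0x2 -c <(tgt_config) &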
00:24:13.583 [2024-07-13 05:13:19.940917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.167 [2024-07-13 05:13:20.480986] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:14.167 [2024-07-13 05:13:20.496962] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:14.167 [2024-07-13 05:13:20.512981] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:14.167 [2024-07-13 05:13:20.513281] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.167 05:13:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:14.167 05:13:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:14.167 05:13:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:14.167 05:13:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:14.167 05:13:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.167 05:13:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:14.167 05:13:20 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=741698 00:24:14.167 05:13:20 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 741698 /var/tmp/bdevperf.sock 00:24:14.167 05:13:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 741698 ']' 00:24:14.167 05:13:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:14.167 05:13:20 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:14.167 05:13:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:14.167 05:13:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:14.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:14.167 05:13:20 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:24:14.167 "subsystems": [ 00:24:14.167 { 00:24:14.167 "subsystem": "keyring", 00:24:14.167 "config": [] 00:24:14.167 }, 00:24:14.167 { 00:24:14.167 "subsystem": "iobuf", 00:24:14.167 "config": [ 00:24:14.167 { 00:24:14.167 "method": "iobuf_set_options", 00:24:14.167 "params": { 00:24:14.167 "small_pool_count": 8192, 00:24:14.167 "large_pool_count": 1024, 00:24:14.167 "small_bufsize": 8192, 00:24:14.167 "large_bufsize": 135168 00:24:14.167 } 00:24:14.167 } 00:24:14.167 ] 00:24:14.167 }, 00:24:14.167 { 00:24:14.167 "subsystem": "sock", 00:24:14.167 "config": [ 00:24:14.167 { 00:24:14.167 "method": "sock_set_default_impl", 00:24:14.167 "params": { 00:24:14.167 "impl_name": "posix" 00:24:14.167 } 00:24:14.167 }, 00:24:14.167 { 00:24:14.167 "method": "sock_impl_set_options", 00:24:14.167 "params": { 00:24:14.167 "impl_name": "ssl", 00:24:14.167 "recv_buf_size": 4096, 00:24:14.167 "send_buf_size": 4096, 00:24:14.167 "enable_recv_pipe": true, 00:24:14.167 "enable_quickack": false, 00:24:14.167 "enable_placement_id": 0, 00:24:14.167 "enable_zerocopy_send_server": true, 00:24:14.167 "enable_zerocopy_send_client": false, 00:24:14.167 "zerocopy_threshold": 0, 00:24:14.167 "tls_version": 0, 00:24:14.167 "enable_ktls": false 00:24:14.167 } 00:24:14.167 }, 00:24:14.167 { 00:24:14.167 "method": "sock_impl_set_options", 00:24:14.167 "params": { 00:24:14.167 "impl_name": "posix", 00:24:14.167 "recv_buf_size": 2097152, 00:24:14.167 "send_buf_size": 2097152, 00:24:14.167 "enable_recv_pipe": true, 00:24:14.167 "enable_quickack": false, 00:24:14.167 "enable_placement_id": 0, 00:24:14.167 "enable_zerocopy_send_server": true, 00:24:14.167 "enable_zerocopy_send_client": false, 00:24:14.167 "zerocopy_threshold": 0, 00:24:14.167 "tls_version": 0, 00:24:14.167 "enable_ktls": false 00:24:14.167 } 00:24:14.167 } 00:24:14.167 ] 00:24:14.167 }, 00:24:14.167 { 00:24:14.167 "subsystem": "vmd", 00:24:14.167 "config": [] 00:24:14.167 }, 00:24:14.167 { 00:24:14.167 "subsystem": "accel", 00:24:14.167 "config": [ 00:24:14.167 { 00:24:14.167 "method": "accel_set_options", 00:24:14.167 "params": { 00:24:14.167 "small_cache_size": 128, 00:24:14.167 "large_cache_size": 16, 00:24:14.167 "task_count": 2048, 00:24:14.167 "sequence_count": 2048, 00:24:14.167 "buf_count": 2048 00:24:14.167 } 00:24:14.167 } 00:24:14.167 ] 00:24:14.167 }, 00:24:14.167 { 00:24:14.167 "subsystem": "bdev", 00:24:14.167 "config": [ 00:24:14.167 { 00:24:14.167 "method": "bdev_set_options", 00:24:14.167 "params": { 00:24:14.167 "bdev_io_pool_size": 65535, 00:24:14.167 "bdev_io_cache_size": 256, 00:24:14.167 "bdev_auto_examine": true, 00:24:14.167 "iobuf_small_cache_size": 128, 00:24:14.167 "iobuf_large_cache_size": 16 00:24:14.167 } 00:24:14.167 }, 00:24:14.167 { 00:24:14.167 "method": "bdev_raid_set_options", 00:24:14.167 "params": { 00:24:14.167 "process_window_size_kb": 1024 00:24:14.167 } 00:24:14.167 }, 00:24:14.167 { 00:24:14.167 "method": "bdev_iscsi_set_options", 00:24:14.167 "params": { 00:24:14.167 "timeout_sec": 30 00:24:14.167 } 00:24:14.167 }, 00:24:14.167 { 00:24:14.167 "method": "bdev_nvme_set_options", 00:24:14.167 "params": { 00:24:14.167 "action_on_timeout": "none", 00:24:14.167 "timeout_us": 0, 00:24:14.167 "timeout_admin_us": 0, 00:24:14.167 "keep_alive_timeout_ms": 10000, 00:24:14.167 "arbitration_burst": 0, 00:24:14.167 "low_priority_weight": 0, 00:24:14.167 "medium_priority_weight": 0, 00:24:14.167 "high_priority_weight": 0, 00:24:14.167 
"nvme_adminq_poll_period_us": 10000, 00:24:14.167 "nvme_ioq_poll_period_us": 0, 00:24:14.167 "io_queue_requests": 512, 00:24:14.167 "delay_cmd_submit": true, 00:24:14.167 "transport_retry_count": 4, 00:24:14.167 "bdev_retry_count": 3, 00:24:14.167 "transport_ack_timeout": 0, 00:24:14.167 "ctrlr_loss_timeout_sec": 0, 00:24:14.167 "reconnect_delay_sec": 0, 00:24:14.167 "fast_io_fail_timeout_sec": 0, 00:24:14.167 "disable_auto_failback": false, 00:24:14.167 "generate_uuids": false, 00:24:14.167 "transport_tos": 0, 00:24:14.167 "nvme_error_stat": false, 00:24:14.167 "rdma_srq_size": 0, 00:24:14.167 "io_path_stat": false, 00:24:14.167 "allow_accel_sequence": false, 00:24:14.167 "rdma_max_cq_size": 0, 00:24:14.167 "rdma_cm_event_timeout_ms": 0, 00:24:14.167 "dhchap_digests": [ 00:24:14.167 "sha256", 00:24:14.167 "sha384", 00:24:14.167 "sha512" 00:24:14.167 ], 00:24:14.167 "dhchap_dhgroups": [ 00:24:14.167 "null", 00:24:14.167 "ffdhe2048", 00:24:14.167 "ffdhe3072", 00:24:14.167 "ffdhe4096", 00:24:14.167 "ffdhe6144", 00:24:14.167 "ffdhe8192" 00:24:14.167 ] 00:24:14.167 } 00:24:14.167 }, 00:24:14.167 { 00:24:14.167 "method": "bdev_nvme_attach_controller", 00:24:14.167 "params": { 00:24:14.167 "name": "TLSTEST", 00:24:14.167 "trtype": "TCP", 00:24:14.167 "adrfam": "IPv4", 00:24:14.167 "traddr": "10.0.0.2", 00:24:14.167 "trsvcid": "4420", 00:24:14.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.167 "prchk_reftag": false, 00:24:14.167 "prchk_guard": false, 00:24:14.167 "ctrlr_loss_timeout_sec": 0, 00:24:14.167 "reconnect_delay_sec": 0, 00:24:14.167 "fast_io_fail_timeout_sec": 0, 00:24:14.167 "psk": "/tmp/tmp.DRLi7OOOm4", 00:24:14.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:14.167 "hdgst": false, 00:24:14.167 "ddgst": false 00:24:14.167 } 00:24:14.167 }, 00:24:14.167 { 00:24:14.167 "method": "bdev_nvme_set_hotplug", 00:24:14.167 "params": { 00:24:14.167 "period_us": 100000, 00:24:14.167 "enable": false 00:24:14.167 } 00:24:14.167 }, 00:24:14.167 { 00:24:14.167 "method": "bdev_wait_for_examine" 00:24:14.167 } 00:24:14.167 ] 00:24:14.167 }, 00:24:14.167 { 00:24:14.167 "subsystem": "nbd", 00:24:14.167 "config": [] 00:24:14.168 } 00:24:14.168 ] 00:24:14.168 }' 00:24:14.168 05:13:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:14.168 05:13:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.168 [2024-07-13 05:13:20.648045] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:14.168 [2024-07-13 05:13:20.648226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid741698 ] 00:24:14.439 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.439 [2024-07-13 05:13:20.774576] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.697 [2024-07-13 05:13:21.005633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:14.955 [2024-07-13 05:13:21.395098] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:14.955 [2024-07-13 05:13:21.395266] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:15.212 05:13:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:15.212 05:13:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:15.212 05:13:21 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:15.470 Running I/O for 10 seconds... 00:24:25.432 00:24:25.432 Latency(us) 00:24:25.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.432 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:25.432 Verification LBA range: start 0x0 length 0x2000 00:24:25.432 TLSTESTn1 : 10.03 2394.45 9.35 0.00 0.00 53352.06 9272.13 55147.33 00:24:25.433 =================================================================================================================== 00:24:25.433 Total : 2394.45 9.35 0.00 0.00 53352.06 9272.13 55147.33 00:24:25.433 0 00:24:25.433 05:13:31 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:25.433 05:13:31 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 741698 00:24:25.433 05:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 741698 ']' 00:24:25.433 05:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 741698 00:24:25.433 05:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:25.433 05:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:25.433 05:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 741698 00:24:25.433 05:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:25.433 05:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:25.433 05:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 741698' 00:24:25.433 killing process with pid 741698 00:24:25.433 05:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 741698 00:24:25.433 Received shutdown signal, test time was about 10.000000 seconds 00:24:25.433 00:24:25.433 Latency(us) 00:24:25.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.433 =================================================================================================================== 00:24:25.433 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:25.433 [2024-07-13 05:13:31.826311] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' 
scheduled for removal in v24.09 hit 1 times 00:24:25.433 05:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 741698 00:24:26.367 05:13:32 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 741543 00:24:26.367 05:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 741543 ']' 00:24:26.367 05:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 741543 00:24:26.367 05:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:26.367 05:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:26.367 05:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 741543 00:24:26.367 05:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:26.367 05:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:26.367 05:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 741543' 00:24:26.367 killing process with pid 741543 00:24:26.367 05:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 741543 00:24:26.367 [2024-07-13 05:13:32.845316] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:26.367 05:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 741543 00:24:28.263 05:13:34 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:24:28.263 05:13:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:28.263 05:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:28.263 05:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:28.263 05:13:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=743299 00:24:28.263 05:13:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:28.263 05:13:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 743299 00:24:28.263 05:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 743299 ']' 00:24:28.263 05:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.263 05:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:28.263 05:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.263 05:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:28.263 05:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:28.263 [2024-07-13 05:13:34.375640] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
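The kill/uname/ps sequences above are autotest_common.sh's killprocess helper at work. Reconstructed from its xtrace output (a sketch, not the helper's actual source; the sudo branch in particular is an assumption):

  killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1                          # still alive?
    local process_name=
    [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
    if [ "$process_name" = sudo ]; then
      kill -9 "$pid"                                    # assumed hard-kill path
    else
      echo "killing process with pid $pid"
      kill "$pid"
    fi
    wait "$pid"
  }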
00:24:28.263 [2024-07-13 05:13:34.375775] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.263 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.263 [2024-07-13 05:13:34.536761] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.521 [2024-07-13 05:13:34.776589] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.521 [2024-07-13 05:13:34.776656] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.521 [2024-07-13 05:13:34.776686] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.521 [2024-07-13 05:13:34.776711] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:28.521 [2024-07-13 05:13:34.776737] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:28.521 [2024-07-13 05:13:34.776783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.085 05:13:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:29.085 05:13:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:29.085 05:13:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:29.085 05:13:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:29.085 05:13:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:29.085 05:13:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:29.086 05:13:35 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.DRLi7OOOm4 00:24:29.086 05:13:35 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.DRLi7OOOm4 00:24:29.086 05:13:35 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:29.344 [2024-07-13 05:13:35.666856] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:29.344 05:13:35 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:29.601 05:13:35 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:29.859 [2024-07-13 05:13:36.208477] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:29.859 [2024-07-13 05:13:36.208793] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:29.859 05:13:36 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:30.116 malloc0 00:24:30.116 05:13:36 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:30.373 05:13:36 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.DRLi7OOOm4 00:24:30.630 [2024-07-13 05:13:36.967794] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:30.630 05:13:36 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=743586 00:24:30.630 05:13:36 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:30.630 05:13:36 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:30.630 05:13:36 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 743586 /var/tmp/bdevperf.sock 00:24:30.630 05:13:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 743586 ']' 00:24:30.630 05:13:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:30.630 05:13:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:30.630 05:13:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:30.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:30.630 05:13:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:30.630 05:13:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.630 [2024-07-13 05:13:37.062764] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:30.630 [2024-07-13 05:13:37.062925] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid743586 ] 00:24:30.630 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.887 [2024-07-13 05:13:37.182386] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.144 [2024-07-13 05:13:37.432423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.710 05:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:31.710 05:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:31.710 05:13:37 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DRLi7OOOm4 00:24:31.968 05:13:38 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:31.968 [2024-07-13 05:13:38.447908] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:32.227 nvme0n1 00:24:32.227 05:13:38 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:32.227 Running I/O for 1 seconds... 
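The setup_nvmf_tgt helper traced above boils down to the following rpc.py sequence, collected from the commands in the log (-k on the listener is what requests the TLS-secured channel, as the "TLS support is considered experimental" notice confirms):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  KEY=/tmp/tmp.DRLi7OOOm4
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk $KEY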
00:24:33.600 00:24:33.600 Latency(us) 00:24:33.600 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.600 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:33.600 Verification LBA range: start 0x0 length 0x2000 00:24:33.600 nvme0n1 : 1.04 2549.13 9.96 0.00 0.00 49385.43 10631.40 50098.63 00:24:33.600 =================================================================================================================== 00:24:33.600 Total : 2549.13 9.96 0.00 0.00 49385.43 10631.40 50098.63 00:24:33.600 0 00:24:33.600 05:13:39 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 743586 00:24:33.600 05:13:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 743586 ']' 00:24:33.600 05:13:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 743586 00:24:33.600 05:13:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:33.600 05:13:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:33.600 05:13:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 743586 00:24:33.600 05:13:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:33.600 05:13:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:33.600 05:13:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 743586' 00:24:33.600 killing process with pid 743586 00:24:33.600 05:13:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 743586 00:24:33.600 Received shutdown signal, test time was about 1.000000 seconds 00:24:33.600 00:24:33.600 Latency(us) 00:24:33.600 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.600 =================================================================================================================== 00:24:33.600 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:33.600 05:13:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 743586 00:24:34.542 05:13:40 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 743299 00:24:34.542 05:13:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 743299 ']' 00:24:34.542 05:13:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 743299 00:24:34.542 05:13:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:34.542 05:13:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:34.542 05:13:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 743299 00:24:34.542 05:13:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:34.542 05:13:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:34.542 05:13:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 743299' 00:24:34.542 killing process with pid 743299 00:24:34.542 05:13:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 743299 00:24:34.542 [2024-07-13 05:13:40.825052] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:34.542 05:13:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 743299 00:24:35.915 05:13:42 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:24:35.915 05:13:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:35.915 05:13:42 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:35.915 05:13:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.915 05:13:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=744256 00:24:35.915 05:13:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:35.915 05:13:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 744256 00:24:35.915 05:13:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 744256 ']' 00:24:35.915 05:13:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.915 05:13:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:35.915 05:13:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:35.915 05:13:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:35.915 05:13:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.915 [2024-07-13 05:13:42.261562] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:35.915 [2024-07-13 05:13:42.261707] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.915 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.915 [2024-07-13 05:13:42.402408] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.174 [2024-07-13 05:13:42.658552] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:36.174 [2024-07-13 05:13:42.658644] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:36.174 [2024-07-13 05:13:42.658674] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:36.174 [2024-07-13 05:13:42.658701] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:36.174 [2024-07-13 05:13:42.658724] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
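waitforlisten shows up before every RPC interaction above; functionally it blocks until the freshly forked app has created its UNIX-domain RPC socket. A rough equivalent (an assumption for illustration, not the helper's real code):

  # poll until $pid is listening on $sock, or give up
  waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock}
    echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
    for _ in $(seq 1 100); do
      kill -0 "$pid" 2>/dev/null || return 1   # app died during startup
      [ -S "$sock" ] && return 0               # socket exists; simplified readiness check
      sleep 0.1
    done
    return 1
  }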
00:24:36.174 [2024-07-13 05:13:42.658778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.741 05:13:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:36.741 05:13:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:36.741 05:13:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:36.741 05:13:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:36.741 05:13:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:36.741 05:13:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.741 05:13:43 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:24:36.742 05:13:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.742 05:13:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:36.742 [2024-07-13 05:13:43.219312] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.001 malloc0 00:24:37.001 [2024-07-13 05:13:43.290083] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:37.001 [2024-07-13 05:13:43.290441] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.001 05:13:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.001 05:13:43 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=744414 00:24:37.001 05:13:43 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:37.001 05:13:43 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 744414 /var/tmp/bdevperf.sock 00:24:37.001 05:13:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 744414 ']' 00:24:37.001 05:13:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:37.001 05:13:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:37.001 05:13:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:37.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:37.001 05:13:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:37.001 05:13:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:37.001 [2024-07-13 05:13:43.395820] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
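A bit further down, at target/tls.sh@263 and @264, both sides of this setup get snapshotted with save_config; the tgtcfg and bperfcfg JSON dumps that follow are those snapshots, and tls.sh@269 then boots a fresh target straight from the saved state. The round trip, sketched (output filenames are illustrative):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py save_config > tgt.json                               # target, default socket
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > bperf.json   # bdevperf instance
  # later: restart the target with the captured configuration
  $SPDK/build/bin/nvmf_tgt -c <(cat tgt.json) &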
00:24:37.001 [2024-07-13 05:13:43.395999] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid744414 ] 00:24:37.001 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.259 [2024-07-13 05:13:43.526075] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.518 [2024-07-13 05:13:43.777311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.083 05:13:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:38.083 05:13:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:38.083 05:13:44 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DRLi7OOOm4 00:24:38.083 05:13:44 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:38.340 [2024-07-13 05:13:44.782073] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:38.609 nvme0n1 00:24:38.609 05:13:44 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:38.609 Running I/O for 1 seconds... 00:24:39.986 00:24:39.986 Latency(us) 00:24:39.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.986 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:39.986 Verification LBA range: start 0x0 length 0x2000 00:24:39.986 nvme0n1 : 1.05 2113.36 8.26 0.00 0.00 59123.51 10437.21 66021.45 00:24:39.986 =================================================================================================================== 00:24:39.986 Total : 2113.36 8.26 0.00 0.00 59123.51 10437.21 66021.45 00:24:39.986 0 00:24:39.986 05:13:46 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:24:39.986 05:13:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.986 05:13:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.986 05:13:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.986 05:13:46 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:24:39.986 "subsystems": [ 00:24:39.986 { 00:24:39.986 "subsystem": "keyring", 00:24:39.986 "config": [ 00:24:39.986 { 00:24:39.986 "method": "keyring_file_add_key", 00:24:39.986 "params": { 00:24:39.986 "name": "key0", 00:24:39.986 "path": "/tmp/tmp.DRLi7OOOm4" 00:24:39.986 } 00:24:39.986 } 00:24:39.986 ] 00:24:39.986 }, 00:24:39.986 { 00:24:39.986 "subsystem": "iobuf", 00:24:39.986 "config": [ 00:24:39.986 { 00:24:39.986 "method": "iobuf_set_options", 00:24:39.986 "params": { 00:24:39.986 "small_pool_count": 8192, 00:24:39.986 "large_pool_count": 1024, 00:24:39.986 "small_bufsize": 8192, 00:24:39.986 "large_bufsize": 135168 00:24:39.986 } 00:24:39.986 } 00:24:39.986 ] 00:24:39.986 }, 00:24:39.986 { 00:24:39.986 "subsystem": "sock", 00:24:39.986 "config": [ 00:24:39.986 { 00:24:39.986 "method": "sock_set_default_impl", 00:24:39.986 "params": { 00:24:39.986 "impl_name": "posix" 00:24:39.986 } 
00:24:39.986 }, 00:24:39.986 { 00:24:39.986 "method": "sock_impl_set_options", 00:24:39.986 "params": { 00:24:39.986 "impl_name": "ssl", 00:24:39.986 "recv_buf_size": 4096, 00:24:39.986 "send_buf_size": 4096, 00:24:39.986 "enable_recv_pipe": true, 00:24:39.986 "enable_quickack": false, 00:24:39.986 "enable_placement_id": 0, 00:24:39.986 "enable_zerocopy_send_server": true, 00:24:39.986 "enable_zerocopy_send_client": false, 00:24:39.986 "zerocopy_threshold": 0, 00:24:39.986 "tls_version": 0, 00:24:39.986 "enable_ktls": false 00:24:39.986 } 00:24:39.986 }, 00:24:39.986 { 00:24:39.986 "method": "sock_impl_set_options", 00:24:39.986 "params": { 00:24:39.986 "impl_name": "posix", 00:24:39.986 "recv_buf_size": 2097152, 00:24:39.986 "send_buf_size": 2097152, 00:24:39.986 "enable_recv_pipe": true, 00:24:39.986 "enable_quickack": false, 00:24:39.986 "enable_placement_id": 0, 00:24:39.986 "enable_zerocopy_send_server": true, 00:24:39.986 "enable_zerocopy_send_client": false, 00:24:39.986 "zerocopy_threshold": 0, 00:24:39.986 "tls_version": 0, 00:24:39.986 "enable_ktls": false 00:24:39.986 } 00:24:39.986 } 00:24:39.986 ] 00:24:39.986 }, 00:24:39.986 { 00:24:39.986 "subsystem": "vmd", 00:24:39.986 "config": [] 00:24:39.987 }, 00:24:39.987 { 00:24:39.987 "subsystem": "accel", 00:24:39.987 "config": [ 00:24:39.987 { 00:24:39.987 "method": "accel_set_options", 00:24:39.987 "params": { 00:24:39.987 "small_cache_size": 128, 00:24:39.987 "large_cache_size": 16, 00:24:39.987 "task_count": 2048, 00:24:39.987 "sequence_count": 2048, 00:24:39.987 "buf_count": 2048 00:24:39.987 } 00:24:39.987 } 00:24:39.987 ] 00:24:39.987 }, 00:24:39.987 { 00:24:39.987 "subsystem": "bdev", 00:24:39.987 "config": [ 00:24:39.987 { 00:24:39.987 "method": "bdev_set_options", 00:24:39.987 "params": { 00:24:39.987 "bdev_io_pool_size": 65535, 00:24:39.987 "bdev_io_cache_size": 256, 00:24:39.987 "bdev_auto_examine": true, 00:24:39.987 "iobuf_small_cache_size": 128, 00:24:39.987 "iobuf_large_cache_size": 16 00:24:39.987 } 00:24:39.987 }, 00:24:39.987 { 00:24:39.987 "method": "bdev_raid_set_options", 00:24:39.987 "params": { 00:24:39.987 "process_window_size_kb": 1024 00:24:39.987 } 00:24:39.987 }, 00:24:39.987 { 00:24:39.987 "method": "bdev_iscsi_set_options", 00:24:39.987 "params": { 00:24:39.987 "timeout_sec": 30 00:24:39.987 } 00:24:39.987 }, 00:24:39.987 { 00:24:39.987 "method": "bdev_nvme_set_options", 00:24:39.987 "params": { 00:24:39.987 "action_on_timeout": "none", 00:24:39.987 "timeout_us": 0, 00:24:39.987 "timeout_admin_us": 0, 00:24:39.987 "keep_alive_timeout_ms": 10000, 00:24:39.987 "arbitration_burst": 0, 00:24:39.987 "low_priority_weight": 0, 00:24:39.987 "medium_priority_weight": 0, 00:24:39.987 "high_priority_weight": 0, 00:24:39.987 "nvme_adminq_poll_period_us": 10000, 00:24:39.987 "nvme_ioq_poll_period_us": 0, 00:24:39.987 "io_queue_requests": 0, 00:24:39.987 "delay_cmd_submit": true, 00:24:39.987 "transport_retry_count": 4, 00:24:39.987 "bdev_retry_count": 3, 00:24:39.987 "transport_ack_timeout": 0, 00:24:39.987 "ctrlr_loss_timeout_sec": 0, 00:24:39.987 "reconnect_delay_sec": 0, 00:24:39.987 "fast_io_fail_timeout_sec": 0, 00:24:39.987 "disable_auto_failback": false, 00:24:39.987 "generate_uuids": false, 00:24:39.987 "transport_tos": 0, 00:24:39.987 "nvme_error_stat": false, 00:24:39.987 "rdma_srq_size": 0, 00:24:39.987 "io_path_stat": false, 00:24:39.987 "allow_accel_sequence": false, 00:24:39.987 "rdma_max_cq_size": 0, 00:24:39.987 "rdma_cm_event_timeout_ms": 0, 00:24:39.987 "dhchap_digests": [ 00:24:39.987 "sha256", 
00:24:39.987 "sha384", 00:24:39.987 "sha512" 00:24:39.987 ], 00:24:39.987 "dhchap_dhgroups": [ 00:24:39.987 "null", 00:24:39.987 "ffdhe2048", 00:24:39.987 "ffdhe3072", 00:24:39.987 "ffdhe4096", 00:24:39.987 "ffdhe6144", 00:24:39.987 "ffdhe8192" 00:24:39.987 ] 00:24:39.987 } 00:24:39.987 }, 00:24:39.987 { 00:24:39.987 "method": "bdev_nvme_set_hotplug", 00:24:39.987 "params": { 00:24:39.987 "period_us": 100000, 00:24:39.987 "enable": false 00:24:39.987 } 00:24:39.987 }, 00:24:39.987 { 00:24:39.987 "method": "bdev_malloc_create", 00:24:39.987 "params": { 00:24:39.987 "name": "malloc0", 00:24:39.987 "num_blocks": 8192, 00:24:39.987 "block_size": 4096, 00:24:39.987 "physical_block_size": 4096, 00:24:39.987 "uuid": "456398a3-1a2c-470b-89da-0328b0f1d0f8", 00:24:39.987 "optimal_io_boundary": 0 00:24:39.987 } 00:24:39.987 }, 00:24:39.987 { 00:24:39.987 "method": "bdev_wait_for_examine" 00:24:39.987 } 00:24:39.987 ] 00:24:39.987 }, 00:24:39.987 { 00:24:39.987 "subsystem": "nbd", 00:24:39.987 "config": [] 00:24:39.987 }, 00:24:39.987 { 00:24:39.987 "subsystem": "scheduler", 00:24:39.987 "config": [ 00:24:39.987 { 00:24:39.987 "method": "framework_set_scheduler", 00:24:39.987 "params": { 00:24:39.987 "name": "static" 00:24:39.987 } 00:24:39.987 } 00:24:39.987 ] 00:24:39.987 }, 00:24:39.987 { 00:24:39.987 "subsystem": "nvmf", 00:24:39.987 "config": [ 00:24:39.987 { 00:24:39.987 "method": "nvmf_set_config", 00:24:39.987 "params": { 00:24:39.987 "discovery_filter": "match_any", 00:24:39.987 "admin_cmd_passthru": { 00:24:39.987 "identify_ctrlr": false 00:24:39.987 } 00:24:39.987 } 00:24:39.987 }, 00:24:39.987 { 00:24:39.987 "method": "nvmf_set_max_subsystems", 00:24:39.987 "params": { 00:24:39.987 "max_subsystems": 1024 00:24:39.987 } 00:24:39.987 }, 00:24:39.987 { 00:24:39.987 "method": "nvmf_set_crdt", 00:24:39.987 "params": { 00:24:39.987 "crdt1": 0, 00:24:39.987 "crdt2": 0, 00:24:39.987 "crdt3": 0 00:24:39.987 } 00:24:39.987 }, 00:24:39.987 { 00:24:39.987 "method": "nvmf_create_transport", 00:24:39.987 "params": { 00:24:39.987 "trtype": "TCP", 00:24:39.987 "max_queue_depth": 128, 00:24:39.987 "max_io_qpairs_per_ctrlr": 127, 00:24:39.987 "in_capsule_data_size": 4096, 00:24:39.987 "max_io_size": 131072, 00:24:39.987 "io_unit_size": 131072, 00:24:39.987 "max_aq_depth": 128, 00:24:39.987 "num_shared_buffers": 511, 00:24:39.987 "buf_cache_size": 4294967295, 00:24:39.987 "dif_insert_or_strip": false, 00:24:39.987 "zcopy": false, 00:24:39.987 "c2h_success": false, 00:24:39.987 "sock_priority": 0, 00:24:39.987 "abort_timeout_sec": 1, 00:24:39.987 "ack_timeout": 0, 00:24:39.987 "data_wr_pool_size": 0 00:24:39.987 } 00:24:39.987 }, 00:24:39.987 { 00:24:39.987 "method": "nvmf_create_subsystem", 00:24:39.987 "params": { 00:24:39.987 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.987 "allow_any_host": false, 00:24:39.987 "serial_number": "00000000000000000000", 00:24:39.987 "model_number": "SPDK bdev Controller", 00:24:39.987 "max_namespaces": 32, 00:24:39.987 "min_cntlid": 1, 00:24:39.987 "max_cntlid": 65519, 00:24:39.987 "ana_reporting": false 00:24:39.987 } 00:24:39.987 }, 00:24:39.987 { 00:24:39.987 "method": "nvmf_subsystem_add_host", 00:24:39.987 "params": { 00:24:39.987 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.987 "host": "nqn.2016-06.io.spdk:host1", 00:24:39.987 "psk": "key0" 00:24:39.987 } 00:24:39.987 }, 00:24:39.987 { 00:24:39.987 "method": "nvmf_subsystem_add_ns", 00:24:39.987 "params": { 00:24:39.987 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.987 "namespace": { 00:24:39.987 "nsid": 1, 
00:24:39.987 "bdev_name": "malloc0", 00:24:39.987 "nguid": "456398A31A2C470B89DA0328B0F1D0F8", 00:24:39.987 "uuid": "456398a3-1a2c-470b-89da-0328b0f1d0f8", 00:24:39.987 "no_auto_visible": false 00:24:39.987 } 00:24:39.987 } 00:24:39.987 }, 00:24:39.987 { 00:24:39.987 "method": "nvmf_subsystem_add_listener", 00:24:39.987 "params": { 00:24:39.987 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.987 "listen_address": { 00:24:39.987 "trtype": "TCP", 00:24:39.987 "adrfam": "IPv4", 00:24:39.987 "traddr": "10.0.0.2", 00:24:39.987 "trsvcid": "4420" 00:24:39.987 }, 00:24:39.987 "secure_channel": true 00:24:39.987 } 00:24:39.987 } 00:24:39.987 ] 00:24:39.987 } 00:24:39.987 ] 00:24:39.987 }' 00:24:39.987 05:13:46 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:40.246 05:13:46 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:24:40.246 "subsystems": [ 00:24:40.246 { 00:24:40.246 "subsystem": "keyring", 00:24:40.246 "config": [ 00:24:40.246 { 00:24:40.246 "method": "keyring_file_add_key", 00:24:40.246 "params": { 00:24:40.246 "name": "key0", 00:24:40.246 "path": "/tmp/tmp.DRLi7OOOm4" 00:24:40.246 } 00:24:40.246 } 00:24:40.246 ] 00:24:40.246 }, 00:24:40.246 { 00:24:40.246 "subsystem": "iobuf", 00:24:40.246 "config": [ 00:24:40.246 { 00:24:40.246 "method": "iobuf_set_options", 00:24:40.246 "params": { 00:24:40.246 "small_pool_count": 8192, 00:24:40.246 "large_pool_count": 1024, 00:24:40.246 "small_bufsize": 8192, 00:24:40.246 "large_bufsize": 135168 00:24:40.246 } 00:24:40.246 } 00:24:40.246 ] 00:24:40.246 }, 00:24:40.246 { 00:24:40.246 "subsystem": "sock", 00:24:40.246 "config": [ 00:24:40.246 { 00:24:40.246 "method": "sock_set_default_impl", 00:24:40.246 "params": { 00:24:40.246 "impl_name": "posix" 00:24:40.246 } 00:24:40.246 }, 00:24:40.246 { 00:24:40.246 "method": "sock_impl_set_options", 00:24:40.246 "params": { 00:24:40.246 "impl_name": "ssl", 00:24:40.246 "recv_buf_size": 4096, 00:24:40.246 "send_buf_size": 4096, 00:24:40.246 "enable_recv_pipe": true, 00:24:40.246 "enable_quickack": false, 00:24:40.246 "enable_placement_id": 0, 00:24:40.246 "enable_zerocopy_send_server": true, 00:24:40.246 "enable_zerocopy_send_client": false, 00:24:40.246 "zerocopy_threshold": 0, 00:24:40.246 "tls_version": 0, 00:24:40.246 "enable_ktls": false 00:24:40.246 } 00:24:40.246 }, 00:24:40.246 { 00:24:40.246 "method": "sock_impl_set_options", 00:24:40.246 "params": { 00:24:40.246 "impl_name": "posix", 00:24:40.246 "recv_buf_size": 2097152, 00:24:40.246 "send_buf_size": 2097152, 00:24:40.246 "enable_recv_pipe": true, 00:24:40.246 "enable_quickack": false, 00:24:40.246 "enable_placement_id": 0, 00:24:40.246 "enable_zerocopy_send_server": true, 00:24:40.246 "enable_zerocopy_send_client": false, 00:24:40.246 "zerocopy_threshold": 0, 00:24:40.246 "tls_version": 0, 00:24:40.246 "enable_ktls": false 00:24:40.246 } 00:24:40.246 } 00:24:40.246 ] 00:24:40.246 }, 00:24:40.246 { 00:24:40.246 "subsystem": "vmd", 00:24:40.246 "config": [] 00:24:40.246 }, 00:24:40.246 { 00:24:40.246 "subsystem": "accel", 00:24:40.246 "config": [ 00:24:40.246 { 00:24:40.246 "method": "accel_set_options", 00:24:40.246 "params": { 00:24:40.246 "small_cache_size": 128, 00:24:40.246 "large_cache_size": 16, 00:24:40.246 "task_count": 2048, 00:24:40.246 "sequence_count": 2048, 00:24:40.246 "buf_count": 2048 00:24:40.246 } 00:24:40.246 } 00:24:40.246 ] 00:24:40.246 }, 00:24:40.246 { 00:24:40.246 "subsystem": "bdev", 00:24:40.246 "config": [ 
00:24:40.246 { 00:24:40.246 "method": "bdev_set_options", 00:24:40.246 "params": { 00:24:40.246 "bdev_io_pool_size": 65535, 00:24:40.246 "bdev_io_cache_size": 256, 00:24:40.247 "bdev_auto_examine": true, 00:24:40.247 "iobuf_small_cache_size": 128, 00:24:40.247 "iobuf_large_cache_size": 16 00:24:40.247 } 00:24:40.247 }, 00:24:40.247 { 00:24:40.247 "method": "bdev_raid_set_options", 00:24:40.247 "params": { 00:24:40.247 "process_window_size_kb": 1024 00:24:40.247 } 00:24:40.247 }, 00:24:40.247 { 00:24:40.247 "method": "bdev_iscsi_set_options", 00:24:40.247 "params": { 00:24:40.247 "timeout_sec": 30 00:24:40.247 } 00:24:40.247 }, 00:24:40.247 { 00:24:40.247 "method": "bdev_nvme_set_options", 00:24:40.247 "params": { 00:24:40.247 "action_on_timeout": "none", 00:24:40.247 "timeout_us": 0, 00:24:40.247 "timeout_admin_us": 0, 00:24:40.247 "keep_alive_timeout_ms": 10000, 00:24:40.247 "arbitration_burst": 0, 00:24:40.247 "low_priority_weight": 0, 00:24:40.247 "medium_priority_weight": 0, 00:24:40.247 "high_priority_weight": 0, 00:24:40.247 "nvme_adminq_poll_period_us": 10000, 00:24:40.247 "nvme_ioq_poll_period_us": 0, 00:24:40.247 "io_queue_requests": 512, 00:24:40.247 "delay_cmd_submit": true, 00:24:40.247 "transport_retry_count": 4, 00:24:40.247 "bdev_retry_count": 3, 00:24:40.247 "transport_ack_timeout": 0, 00:24:40.247 "ctrlr_loss_timeout_sec": 0, 00:24:40.247 "reconnect_delay_sec": 0, 00:24:40.247 "fast_io_fail_timeout_sec": 0, 00:24:40.247 "disable_auto_failback": false, 00:24:40.247 "generate_uuids": false, 00:24:40.247 "transport_tos": 0, 00:24:40.247 "nvme_error_stat": false, 00:24:40.247 "rdma_srq_size": 0, 00:24:40.247 "io_path_stat": false, 00:24:40.247 "allow_accel_sequence": false, 00:24:40.247 "rdma_max_cq_size": 0, 00:24:40.247 "rdma_cm_event_timeout_ms": 0, 00:24:40.247 "dhchap_digests": [ 00:24:40.247 "sha256", 00:24:40.247 "sha384", 00:24:40.247 "sha512" 00:24:40.247 ], 00:24:40.247 "dhchap_dhgroups": [ 00:24:40.247 "null", 00:24:40.247 "ffdhe2048", 00:24:40.247 "ffdhe3072", 00:24:40.247 "ffdhe4096", 00:24:40.247 "ffdhe6144", 00:24:40.247 "ffdhe8192" 00:24:40.247 ] 00:24:40.247 } 00:24:40.247 }, 00:24:40.247 { 00:24:40.247 "method": "bdev_nvme_attach_controller", 00:24:40.247 "params": { 00:24:40.247 "name": "nvme0", 00:24:40.247 "trtype": "TCP", 00:24:40.247 "adrfam": "IPv4", 00:24:40.247 "traddr": "10.0.0.2", 00:24:40.247 "trsvcid": "4420", 00:24:40.247 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.247 "prchk_reftag": false, 00:24:40.247 "prchk_guard": false, 00:24:40.247 "ctrlr_loss_timeout_sec": 0, 00:24:40.247 "reconnect_delay_sec": 0, 00:24:40.247 "fast_io_fail_timeout_sec": 0, 00:24:40.247 "psk": "key0", 00:24:40.247 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:40.247 "hdgst": false, 00:24:40.247 "ddgst": false 00:24:40.247 } 00:24:40.247 }, 00:24:40.247 { 00:24:40.247 "method": "bdev_nvme_set_hotplug", 00:24:40.247 "params": { 00:24:40.247 "period_us": 100000, 00:24:40.247 "enable": false 00:24:40.247 } 00:24:40.247 }, 00:24:40.247 { 00:24:40.247 "method": "bdev_enable_histogram", 00:24:40.247 "params": { 00:24:40.247 "name": "nvme0n1", 00:24:40.247 "enable": true 00:24:40.247 } 00:24:40.247 }, 00:24:40.247 { 00:24:40.247 "method": "bdev_wait_for_examine" 00:24:40.247 } 00:24:40.247 ] 00:24:40.247 }, 00:24:40.247 { 00:24:40.247 "subsystem": "nbd", 00:24:40.247 "config": [] 00:24:40.247 } 00:24:40.247 ] 00:24:40.247 }' 00:24:40.247 05:13:46 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 744414 00:24:40.247 05:13:46 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 744414 ']' 00:24:40.247 05:13:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 744414 00:24:40.247 05:13:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:40.247 05:13:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:40.247 05:13:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 744414 00:24:40.247 05:13:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:40.247 05:13:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:40.247 05:13:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 744414' 00:24:40.247 killing process with pid 744414 00:24:40.247 05:13:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 744414 00:24:40.247 Received shutdown signal, test time was about 1.000000 seconds 00:24:40.247 00:24:40.247 Latency(us) 00:24:40.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.247 =================================================================================================================== 00:24:40.247 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:40.247 05:13:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 744414 00:24:41.182 05:13:47 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 744256 00:24:41.182 05:13:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 744256 ']' 00:24:41.182 05:13:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 744256 00:24:41.182 05:13:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:41.182 05:13:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:41.182 05:13:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 744256 00:24:41.182 05:13:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:41.182 05:13:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:41.182 05:13:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 744256' 00:24:41.182 killing process with pid 744256 00:24:41.182 05:13:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 744256 00:24:41.182 05:13:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 744256 00:24:42.557 05:13:48 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:24:42.557 05:13:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:42.557 05:13:48 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:24:42.557 "subsystems": [ 00:24:42.557 { 00:24:42.557 "subsystem": "keyring", 00:24:42.557 "config": [ 00:24:42.557 { 00:24:42.557 "method": "keyring_file_add_key", 00:24:42.557 "params": { 00:24:42.557 "name": "key0", 00:24:42.557 "path": "/tmp/tmp.DRLi7OOOm4" 00:24:42.557 } 00:24:42.557 } 00:24:42.557 ] 00:24:42.557 }, 00:24:42.557 { 00:24:42.557 "subsystem": "iobuf", 00:24:42.557 "config": [ 00:24:42.557 { 00:24:42.557 "method": "iobuf_set_options", 00:24:42.557 "params": { 00:24:42.557 "small_pool_count": 8192, 00:24:42.557 "large_pool_count": 1024, 00:24:42.557 "small_bufsize": 8192, 00:24:42.557 "large_bufsize": 135168 00:24:42.557 } 00:24:42.557 } 00:24:42.557 ] 00:24:42.557 }, 00:24:42.557 { 00:24:42.557 "subsystem": "sock", 00:24:42.557 "config": [ 00:24:42.557 { 00:24:42.557 "method": 
"sock_set_default_impl", 00:24:42.557 "params": { 00:24:42.557 "impl_name": "posix" 00:24:42.557 } 00:24:42.557 }, 00:24:42.557 { 00:24:42.557 "method": "sock_impl_set_options", 00:24:42.557 "params": { 00:24:42.557 "impl_name": "ssl", 00:24:42.557 "recv_buf_size": 4096, 00:24:42.557 "send_buf_size": 4096, 00:24:42.557 "enable_recv_pipe": true, 00:24:42.557 "enable_quickack": false, 00:24:42.557 "enable_placement_id": 0, 00:24:42.557 "enable_zerocopy_send_server": true, 00:24:42.557 "enable_zerocopy_send_client": false, 00:24:42.557 "zerocopy_threshold": 0, 00:24:42.557 "tls_version": 0, 00:24:42.557 "enable_ktls": false 00:24:42.557 } 00:24:42.557 }, 00:24:42.557 { 00:24:42.557 "method": "sock_impl_set_options", 00:24:42.557 "params": { 00:24:42.557 "impl_name": "posix", 00:24:42.557 "recv_buf_size": 2097152, 00:24:42.557 "send_buf_size": 2097152, 00:24:42.557 "enable_recv_pipe": true, 00:24:42.557 "enable_quickack": false, 00:24:42.557 "enable_placement_id": 0, 00:24:42.557 "enable_zerocopy_send_server": true, 00:24:42.557 "enable_zerocopy_send_client": false, 00:24:42.557 "zerocopy_threshold": 0, 00:24:42.557 "tls_version": 0, 00:24:42.557 "enable_ktls": false 00:24:42.557 } 00:24:42.557 } 00:24:42.557 ] 00:24:42.557 }, 00:24:42.557 { 00:24:42.557 "subsystem": "vmd", 00:24:42.557 "config": [] 00:24:42.557 }, 00:24:42.557 { 00:24:42.557 "subsystem": "accel", 00:24:42.557 "config": [ 00:24:42.557 { 00:24:42.557 "method": "accel_set_options", 00:24:42.557 "params": { 00:24:42.557 "small_cache_size": 128, 00:24:42.557 "large_cache_size": 16, 00:24:42.557 "task_count": 2048, 00:24:42.557 "sequence_count": 2048, 00:24:42.557 "buf_count": 2048 00:24:42.557 } 00:24:42.557 } 00:24:42.557 ] 00:24:42.557 }, 00:24:42.557 { 00:24:42.557 "subsystem": "bdev", 00:24:42.557 "config": [ 00:24:42.557 { 00:24:42.557 "method": "bdev_set_options", 00:24:42.557 "params": { 00:24:42.557 "bdev_io_pool_size": 65535, 00:24:42.557 "bdev_io_cache_size": 256, 00:24:42.557 "bdev_auto_examine": true, 00:24:42.557 "iobuf_small_cache_size": 128, 00:24:42.557 "iobuf_large_cache_size": 16 00:24:42.557 } 00:24:42.557 }, 00:24:42.557 { 00:24:42.557 "method": "bdev_raid_set_options", 00:24:42.557 "params": { 00:24:42.557 "process_window_size_kb": 1024 00:24:42.557 } 00:24:42.557 }, 00:24:42.557 { 00:24:42.557 "method": "bdev_iscsi_set_options", 00:24:42.557 "params": { 00:24:42.557 "timeout_sec": 30 00:24:42.558 } 00:24:42.558 }, 00:24:42.558 { 00:24:42.558 "method": "bdev_nvme_set_options", 00:24:42.558 "params": { 00:24:42.558 "action_on_timeout": "none", 00:24:42.558 "timeout_us": 0, 00:24:42.558 "timeout_admin_us": 0, 00:24:42.558 "keep_alive_timeout_ms": 10000, 00:24:42.558 "arbitration_burst": 0, 00:24:42.558 "low_priority_weight": 0, 00:24:42.558 "medium_priority_weight": 0, 00:24:42.558 "high_priority_weight": 0, 00:24:42.558 "nvme_adminq_poll_period_us": 10000, 00:24:42.558 "nvme_ioq_poll_period_us": 0, 00:24:42.558 "io_queue_requests": 0, 00:24:42.558 "delay_cmd_submit": true, 00:24:42.558 "transport_retry_count": 4, 00:24:42.558 "bdev_retry_count": 3, 00:24:42.558 "transport_ack_timeout": 0, 00:24:42.558 "ctrlr_loss_timeout_sec": 0, 00:24:42.558 "reconnect_delay_sec": 0, 00:24:42.558 "fast_io_fail_timeout_sec": 0, 00:24:42.558 "disable_auto_failback": false, 00:24:42.558 "generate_uuids": false, 00:24:42.558 "transport_tos": 0, 00:24:42.558 "nvme_error_stat": false, 00:24:42.558 "rdma_srq_size": 0, 00:24:42.558 "io_path_stat": false, 00:24:42.558 "allow_accel_sequence": false, 00:24:42.558 "rdma_max_cq_size": 0, 
00:24:42.558 "rdma_cm_event_timeout_ms": 0, 00:24:42.558 "dhchap_digests": [ 00:24:42.558 "sha256", 00:24:42.558 "sha384", 00:24:42.558 "sha512" 00:24:42.558 ], 00:24:42.558 "dhchap_dhgroups": [ 00:24:42.558 "null", 00:24:42.558 "ffdhe2048", 00:24:42.558 "ffdhe3072", 00:24:42.558 "ffdhe4096", 00:24:42.558 "ffdhe6144", 00:24:42.558 "ffdhe8192" 00:24:42.558 ] 00:24:42.558 } 00:24:42.558 }, 00:24:42.558 { 00:24:42.558 "method": "bdev_nvme_set_hotplug", 00:24:42.558 "params": { 00:24:42.558 "period_us": 100000, 00:24:42.558 "enable": false 00:24:42.558 } 00:24:42.558 }, 00:24:42.558 { 00:24:42.558 "method": "bdev_malloc_create", 00:24:42.558 "params": { 00:24:42.558 "name": "malloc0", 00:24:42.558 "num_blocks": 8192, 00:24:42.558 "block_size": 4096, 00:24:42.558 "physical_block_size": 4096, 00:24:42.558 "uuid": "456398a3-1a2c-470b-89da-0328b0f1d0f8", 00:24:42.558 "optimal_io_boundary": 0 00:24:42.558 } 00:24:42.558 }, 00:24:42.558 { 00:24:42.558 "method": "bdev_wait_for_examine" 00:24:42.558 } 00:24:42.558 ] 00:24:42.558 }, 00:24:42.558 { 00:24:42.558 "subsystem": "nbd", 00:24:42.558 "config": [] 00:24:42.558 }, 00:24:42.558 { 00:24:42.558 "subsystem": "scheduler", 00:24:42.558 "config": [ 00:24:42.558 { 00:24:42.558 "method": "framework_set_scheduler", 00:24:42.558 "params": { 00:24:42.558 "name": "static" 00:24:42.558 } 00:24:42.558 } 00:24:42.558 ] 00:24:42.558 }, 00:24:42.558 { 00:24:42.558 "subsystem": "nvmf", 00:24:42.558 "config": [ 00:24:42.558 { 00:24:42.558 "method": "nvmf_set_config", 00:24:42.558 "params": { 00:24:42.558 "discovery_filter": "match_any", 00:24:42.558 "admin_cmd_passthru": { 00:24:42.558 "identify_ctrlr": false 00:24:42.558 } 00:24:42.558 } 00:24:42.558 }, 00:24:42.558 { 00:24:42.558 "method": "nvmf_set_max_subsystems", 00:24:42.558 "params": { 00:24:42.558 "max_subsystems": 1024 00:24:42.558 } 00:24:42.558 }, 00:24:42.558 { 00:24:42.558 "method": "nvmf_set_crdt", 00:24:42.558 "params": { 00:24:42.558 "crdt1": 0, 00:24:42.558 "crdt2": 0, 00:24:42.558 "crdt3": 0 00:24:42.558 } 00:24:42.558 }, 00:24:42.558 { 00:24:42.558 "method": "nvmf_create_transport", 00:24:42.558 "params": { 00:24:42.558 "trtype": "TCP", 00:24:42.558 "max_queue_depth": 128, 00:24:42.558 "max_io_qpairs_per_ctrlr": 127, 00:24:42.558 "in_capsule_data_size": 4096, 00:24:42.558 "max_io_size": 131072, 00:24:42.558 "io_unit_size": 131072, 00:24:42.558 "max_aq_depth": 128, 00:24:42.558 "num_shared_buffers": 511, 00:24:42.558 "buf_cache_size": 4294967295, 00:24:42.558 "dif_insert_or_strip": false, 00:24:42.558 "zcopy": false, 00:24:42.558 "c2h_success": false, 00:24:42.558 "sock_priority": 0, 00:24:42.558 "abort_timeout_sec": 1, 00:24:42.558 "ack_timeout": 0, 00:24:42.558 "data_wr_pool_size": 0 00:24:42.558 } 00:24:42.558 }, 00:24:42.558 { 00:24:42.558 "method": "nvmf_create_subsystem", 00:24:42.558 "params": { 00:24:42.558 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:42.558 "allow_any_host": false, 00:24:42.558 "serial_number": "00000000000000000000", 00:24:42.558 "model_number": "SPDK bdev Controller", 00:24:42.558 "max_namespaces": 32, 00:24:42.558 "min_cntlid": 1, 00:24:42.558 "max_cntlid": 65519, 00:24:42.558 "ana_reporting": false 00:24:42.558 } 00:24:42.558 }, 00:24:42.558 { 00:24:42.558 "method": "nvmf_subsystem_add_host", 00:24:42.558 "params": { 00:24:42.558 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:42.558 "host": "nqn.2016-06.io.spdk:host1", 00:24:42.558 "psk": "key0" 00:24:42.558 } 00:24:42.558 }, 00:24:42.558 { 00:24:42.558 "method": "nvmf_subsystem_add_ns", 00:24:42.558 "params": { 
00:24:42.558 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:42.558 "namespace": { 00:24:42.558 "nsid": 1, 00:24:42.558 "bdev_name": "malloc0", 00:24:42.558 "nguid": "456398A31A2C470B89DA0328B0F1D0F8", 00:24:42.558 "uuid": "456398a3-1a2c-470b-89da-0328b0f1d0f8", 00:24:42.558 "no_auto_visible": false 00:24:42.558 } 00:24:42.558 } 00:24:42.558 }, 00:24:42.558 { 00:24:42.558 "method": "nvmf_subsystem_add_listener", 00:24:42.558 "params": { 00:24:42.558 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:42.558 "listen_address": { 00:24:42.558 "trtype": "TCP", 00:24:42.558 "adrfam": "IPv4", 00:24:42.558 "traddr": "10.0.0.2", 00:24:42.558 "trsvcid": "4420" 00:24:42.558 }, 00:24:42.558 "secure_channel": true 00:24:42.558 } 00:24:42.558 } 00:24:42.558 ] 00:24:42.558 } 00:24:42.558 ] 00:24:42.558 }' 00:24:42.558 05:13:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:42.558 05:13:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.558 05:13:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=745083 00:24:42.558 05:13:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:42.558 05:13:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 745083 00:24:42.558 05:13:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 745083 ']' 00:24:42.558 05:13:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.558 05:13:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:42.558 05:13:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.558 05:13:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:42.558 05:13:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.817 [2024-07-13 05:13:49.076686] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:42.817 [2024-07-13 05:13:49.076826] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.817 EAL: No free 2048 kB hugepages reported on node 1 00:24:42.817 [2024-07-13 05:13:49.205501] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.075 [2024-07-13 05:13:49.429581] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:43.075 [2024-07-13 05:13:49.429688] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:43.075 [2024-07-13 05:13:49.429716] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:43.075 [2024-07-13 05:13:49.429738] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:43.075 [2024-07-13 05:13:49.429757] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:43.075 [2024-07-13 05:13:49.429928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.642 [2024-07-13 05:13:49.930724] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.642 [2024-07-13 05:13:49.962712] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:43.642 [2024-07-13 05:13:49.962988] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.642 05:13:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:43.642 05:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:43.642 05:13:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:43.642 05:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:43.642 05:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.642 05:13:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:43.642 05:13:50 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=745236 00:24:43.642 05:13:50 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 745236 /var/tmp/bdevperf.sock 00:24:43.642 05:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 745236 ']' 00:24:43.642 05:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:43.642 05:13:50 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:43.642 05:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:43.642 05:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:43.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
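[annotation] bdevperf is launched with -z above, so it idles until it is driven over /var/tmp/bdevperf.sock; the -c /dev/fd/63 config echoed next plays the initiator role. A rough sketch of the same flow done by hand, with the attach parameters taken from that config dump (hedged: in the actual run they arrive via the inline config, not rpc.py, and the flag spellings here are assumptions):

    # hypothetical manual driver for the bdevperf run below
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 &
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DRLi7OOOm4
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Attaching with --psk key0 against the secure-channel listener is what the test verifies: bdev_nvme_get_controllers must report nvme0 and the verify workload must complete over the TLS-wrapped TCP connection, as seen in the run below.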
00:24:43.642 05:13:50 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:24:43.642 "subsystems": [ 00:24:43.642 { 00:24:43.642 "subsystem": "keyring", 00:24:43.642 "config": [ 00:24:43.642 { 00:24:43.642 "method": "keyring_file_add_key", 00:24:43.642 "params": { 00:24:43.642 "name": "key0", 00:24:43.642 "path": "/tmp/tmp.DRLi7OOOm4" 00:24:43.642 } 00:24:43.642 } 00:24:43.642 ] 00:24:43.642 }, 00:24:43.642 { 00:24:43.642 "subsystem": "iobuf", 00:24:43.642 "config": [ 00:24:43.642 { 00:24:43.642 "method": "iobuf_set_options", 00:24:43.642 "params": { 00:24:43.642 "small_pool_count": 8192, 00:24:43.642 "large_pool_count": 1024, 00:24:43.642 "small_bufsize": 8192, 00:24:43.642 "large_bufsize": 135168 00:24:43.642 } 00:24:43.642 } 00:24:43.642 ] 00:24:43.642 }, 00:24:43.642 { 00:24:43.642 "subsystem": "sock", 00:24:43.642 "config": [ 00:24:43.642 { 00:24:43.642 "method": "sock_set_default_impl", 00:24:43.642 "params": { 00:24:43.642 "impl_name": "posix" 00:24:43.642 } 00:24:43.642 }, 00:24:43.642 { 00:24:43.642 "method": "sock_impl_set_options", 00:24:43.642 "params": { 00:24:43.642 "impl_name": "ssl", 00:24:43.642 "recv_buf_size": 4096, 00:24:43.642 "send_buf_size": 4096, 00:24:43.642 "enable_recv_pipe": true, 00:24:43.643 "enable_quickack": false, 00:24:43.643 "enable_placement_id": 0, 00:24:43.643 "enable_zerocopy_send_server": true, 00:24:43.643 "enable_zerocopy_send_client": false, 00:24:43.643 "zerocopy_threshold": 0, 00:24:43.643 "tls_version": 0, 00:24:43.643 "enable_ktls": false 00:24:43.643 } 00:24:43.643 }, 00:24:43.643 { 00:24:43.643 "method": "sock_impl_set_options", 00:24:43.643 "params": { 00:24:43.643 "impl_name": "posix", 00:24:43.643 "recv_buf_size": 2097152, 00:24:43.643 "send_buf_size": 2097152, 00:24:43.643 "enable_recv_pipe": true, 00:24:43.643 "enable_quickack": false, 00:24:43.643 "enable_placement_id": 0, 00:24:43.643 "enable_zerocopy_send_server": true, 00:24:43.643 "enable_zerocopy_send_client": false, 00:24:43.643 "zerocopy_threshold": 0, 00:24:43.643 "tls_version": 0, 00:24:43.643 "enable_ktls": false 00:24:43.643 } 00:24:43.643 } 00:24:43.643 ] 00:24:43.643 }, 00:24:43.643 { 00:24:43.643 "subsystem": "vmd", 00:24:43.643 "config": [] 00:24:43.643 }, 00:24:43.643 { 00:24:43.643 "subsystem": "accel", 00:24:43.643 "config": [ 00:24:43.643 { 00:24:43.643 "method": "accel_set_options", 00:24:43.643 "params": { 00:24:43.643 "small_cache_size": 128, 00:24:43.643 "large_cache_size": 16, 00:24:43.643 "task_count": 2048, 00:24:43.643 "sequence_count": 2048, 00:24:43.643 "buf_count": 2048 00:24:43.643 } 00:24:43.643 } 00:24:43.643 ] 00:24:43.643 }, 00:24:43.643 { 00:24:43.643 "subsystem": "bdev", 00:24:43.643 "config": [ 00:24:43.643 { 00:24:43.643 "method": "bdev_set_options", 00:24:43.643 "params": { 00:24:43.643 "bdev_io_pool_size": 65535, 00:24:43.643 "bdev_io_cache_size": 256, 00:24:43.643 "bdev_auto_examine": true, 00:24:43.643 "iobuf_small_cache_size": 128, 00:24:43.643 "iobuf_large_cache_size": 16 00:24:43.643 } 00:24:43.643 }, 00:24:43.643 { 00:24:43.643 "method": "bdev_raid_set_options", 00:24:43.643 "params": { 00:24:43.643 "process_window_size_kb": 1024 00:24:43.643 } 00:24:43.643 }, 00:24:43.643 { 00:24:43.643 "method": "bdev_iscsi_set_options", 00:24:43.643 "params": { 00:24:43.643 "timeout_sec": 30 00:24:43.643 } 00:24:43.643 }, 00:24:43.643 { 00:24:43.643 "method": "bdev_nvme_set_options", 00:24:43.643 "params": { 00:24:43.643 "action_on_timeout": "none", 00:24:43.643 "timeout_us": 0, 00:24:43.643 "timeout_admin_us": 0, 00:24:43.643 "keep_alive_timeout_ms": 
10000, 00:24:43.643 "arbitration_burst": 0, 00:24:43.643 "low_priority_weight": 0, 00:24:43.643 "medium_priority_weight": 0, 00:24:43.643 "high_priority_weight": 0, 00:24:43.643 "nvme_adminq_poll_period_us": 10000, 00:24:43.643 "nvme_ioq_poll_period_us": 0, 00:24:43.643 "io_queue_requests": 512, 00:24:43.643 "delay_cmd_submit": true, 00:24:43.643 "transport_retry_count": 4, 00:24:43.643 "bdev_retry_count": 3, 00:24:43.643 "transport_ack_timeout": 0, 00:24:43.643 "ctrlr_loss_timeout_sec": 0, 00:24:43.643 "reconnect_delay_sec": 0, 00:24:43.643 "fast_io_fail_timeout_sec": 0, 00:24:43.643 "disable_auto_failback": false, 00:24:43.643 "generate_uuids": false, 00:24:43.643 "transport_tos": 0, 00:24:43.643 "nvme_error_stat": false, 00:24:43.643 "rdma_srq_size": 0, 00:24:43.643 "io_path_stat": false, 00:24:43.643 "allow_accel_sequence": false, 00:24:43.643 "rdma_max_cq_size": 0, 00:24:43.643 "rdma_cm_event_timeout_ms": 0, 00:24:43.643 "dhchap_digests": [ 00:24:43.643 "sha256", 00:24:43.643 "sha384", 00:24:43.643 "sha512" 00:24:43.643 ], 00:24:43.643 "dhchap_dhgroups": [ 00:24:43.643 "null", 00:24:43.643 "ffdhe2048", 00:24:43.643 "ffdhe3072", 00:24:43.643 "ffdhe4096", 00:24:43.643 "ffdhe6144", 00:24:43.643 "ffdhe8192" 00:24:43.643 ] 00:24:43.643 } 00:24:43.643 }, 00:24:43.643 { 00:24:43.643 "method": "bdev_nvme_attach_controller", 00:24:43.643 "params": { 00:24:43.643 "name": "nvme0", 00:24:43.643 "trtype": "TCP", 00:24:43.643 "adrfam": "IPv4", 00:24:43.643 "traddr": "10.0.0.2", 00:24:43.643 "trsvcid": "4420", 00:24:43.643 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.643 "prchk_reftag": false, 00:24:43.643 "prchk_guard": false, 00:24:43.643 "ctrlr_loss_timeout_sec": 0, 00:24:43.643 "reconnect_delay_sec": 0, 00:24:43.643 "fast_io_fail_timeout_sec": 0, 00:24:43.643 "psk": "key0", 00:24:43.643 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:43.643 "hdgst": false, 00:24:43.643 "ddgst": false 00:24:43.643 } 00:24:43.643 }, 00:24:43.643 { 00:24:43.643 "method": "bdev_nvme_set_hotplug", 00:24:43.643 "params": { 00:24:43.643 "period_us": 100000, 00:24:43.643 "enable": false 00:24:43.643 } 00:24:43.643 }, 00:24:43.643 { 00:24:43.643 "method": "bdev_enable_histogram", 00:24:43.643 "params": { 00:24:43.643 "name": "nvme0n1", 00:24:43.643 "enable": true 00:24:43.643 } 00:24:43.643 }, 00:24:43.643 { 00:24:43.643 "method": "bdev_wait_for_examine" 00:24:43.643 } 00:24:43.643 ] 00:24:43.643 }, 00:24:43.643 { 00:24:43.643 "subsystem": "nbd", 00:24:43.643 "config": [] 00:24:43.643 } 00:24:43.643 ] 00:24:43.643 }' 00:24:43.643 05:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:43.643 05:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.643 [2024-07-13 05:13:50.102114] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:43.643 [2024-07-13 05:13:50.102293] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid745236 ] 00:24:43.902 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.902 [2024-07-13 05:13:50.234739] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.160 [2024-07-13 05:13:50.486810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.728 [2024-07-13 05:13:50.922630] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:44.728 05:13:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:44.728 05:13:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:44.728 05:13:51 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:44.728 05:13:51 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:24:44.986 05:13:51 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.986 05:13:51 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:44.986 Running I/O for 1 seconds... 00:24:46.361 00:24:46.361 Latency(us) 00:24:46.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.361 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:46.361 Verification LBA range: start 0x0 length 0x2000 00:24:46.361 nvme0n1 : 1.04 2425.99 9.48 0.00 0.00 51929.40 8301.23 51263.72 00:24:46.361 =================================================================================================================== 00:24:46.361 Total : 2425.99 9.48 0.00 0.00 51929.40 8301.23 51263.72 00:24:46.361 0 00:24:46.362 05:13:52 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:24:46.362 05:13:52 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:24:46.362 05:13:52 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:46.362 05:13:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:24:46.362 05:13:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:24:46.362 05:13:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:24:46.362 05:13:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:46.362 05:13:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:24:46.362 05:13:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:24:46.362 05:13:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:24:46.362 05:13:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:46.362 nvmf_trace.0 00:24:46.362 05:13:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:24:46.362 05:13:52 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 745236 00:24:46.362 05:13:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 745236 ']' 00:24:46.362 05:13:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # 
kill -0 745236 00:24:46.362 05:13:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:46.362 05:13:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:46.362 05:13:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 745236 00:24:46.362 05:13:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:46.362 05:13:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:46.362 05:13:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 745236' 00:24:46.362 killing process with pid 745236 00:24:46.362 05:13:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 745236 00:24:46.362 Received shutdown signal, test time was about 1.000000 seconds 00:24:46.362 00:24:46.362 Latency(us) 00:24:46.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.362 =================================================================================================================== 00:24:46.362 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:46.362 05:13:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 745236 00:24:47.294 05:13:53 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:47.294 05:13:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:47.294 05:13:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:24:47.294 05:13:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:47.294 05:13:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:24:47.294 05:13:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:47.294 05:13:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:47.294 rmmod nvme_tcp 00:24:47.294 rmmod nvme_fabrics 00:24:47.294 rmmod nvme_keyring 00:24:47.294 05:13:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:47.294 05:13:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:24:47.294 05:13:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:24:47.294 05:13:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 745083 ']' 00:24:47.294 05:13:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 745083 00:24:47.294 05:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 745083 ']' 00:24:47.294 05:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 745083 00:24:47.294 05:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:47.294 05:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:47.294 05:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 745083 00:24:47.294 05:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:47.294 05:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:47.294 05:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 745083' 00:24:47.294 killing process with pid 745083 00:24:47.294 05:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 745083 00:24:47.294 05:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 745083 00:24:48.669 05:13:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:48.669 05:13:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:48.669 05:13:55 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:48.669 05:13:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:48.669 05:13:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:48.669 05:13:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.669 05:13:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:48.669 05:13:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.204 05:13:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:51.204 05:13:57 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.3YRBYwMnVm /tmp/tmp.LwggZ5Ag4c /tmp/tmp.DRLi7OOOm4 00:24:51.204 00:24:51.204 real 1m50.094s 00:24:51.204 user 2m57.346s 00:24:51.204 sys 0m27.326s 00:24:51.204 05:13:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:51.204 05:13:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:51.204 ************************************ 00:24:51.204 END TEST nvmf_tls 00:24:51.204 ************************************ 00:24:51.204 05:13:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:51.204 05:13:57 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:51.204 05:13:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:51.204 05:13:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:51.204 05:13:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:51.204 ************************************ 00:24:51.204 START TEST nvmf_fips 00:24:51.204 ************************************ 00:24:51.204 05:13:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:51.204 * Looking for test storage... 
00:24:51.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:51.204 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:51.204 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:51.204 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:51.204 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:51.204 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:51.204 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:51.204 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:51.204 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:51.204 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:51.204 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:51.204 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:51.204 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:51.204 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:51.204 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:51.204 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:51.204 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:51.204 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:51.204 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:51.204 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.205 05:13:57 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:24:51.205 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:24:51.206 Error setting digest 00:24:51.206 00227CED397F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:24:51.206 00227CED397F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:24:51.206 05:13:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:53.105 
05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:53.105 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:53.105 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:53.105 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:53.105 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:53.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:53.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:24:53.105 00:24:53.105 --- 10.0.0.2 ping statistics --- 00:24:53.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.105 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:53.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:53.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:24:53.105 00:24:53.105 --- 10.0.0.1 ping statistics --- 00:24:53.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.105 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=747740 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 747740 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 747740 ']' 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:53.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:53.105 05:13:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:53.363 [2024-07-13 05:13:59.640655] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:53.363 [2024-07-13 05:13:59.640788] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:53.363 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.363 [2024-07-13 05:13:59.779461] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.621 [2024-07-13 05:14:00.045064] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:53.621 [2024-07-13 05:14:00.045156] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:53.621 [2024-07-13 05:14:00.045189] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:53.621 [2024-07-13 05:14:00.045210] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:53.621 [2024-07-13 05:14:00.045231] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:53.621 [2024-07-13 05:14:00.045287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.186 05:14:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:54.186 05:14:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:24:54.186 05:14:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:54.186 05:14:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:54.186 05:14:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:54.186 05:14:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:54.186 05:14:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:24:54.186 05:14:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:54.187 05:14:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:54.187 05:14:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:54.187 05:14:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:54.187 05:14:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:54.187 05:14:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:54.187 05:14:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:54.445 [2024-07-13 05:14:00.818007] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.445 [2024-07-13 05:14:00.833975] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:54.445 [2024-07-13 05:14:00.834242] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.445 [2024-07-13 05:14:00.909585] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:54.445 malloc0 00:24:54.445 05:14:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:54.445 05:14:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=747895 00:24:54.445 05:14:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:54.445 05:14:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 747895 /var/tmp/bdevperf.sock 00:24:54.445 05:14:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 747895 ']' 00:24:54.445 05:14:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:54.445 05:14:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:24:54.445 05:14:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:54.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:54.445 05:14:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:54.445 05:14:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:54.703 [2024-07-13 05:14:01.042031] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:54.703 [2024-07-13 05:14:01.042179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid747895 ] 00:24:54.703 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.703 [2024-07-13 05:14:01.164967] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.961 [2024-07-13 05:14:01.391644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.526 05:14:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:55.526 05:14:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:24:55.526 05:14:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:55.784 [2024-07-13 05:14:02.169560] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:55.784 [2024-07-13 05:14:02.169744] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:55.784 TLSTESTn1 00:24:55.784 05:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:56.041 Running I/O for 10 seconds... 
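The sequence above is the core of the TLS exercise: a PSK in NVMe TLS interchange format is written to a 0600-mode file, the target exposes a TLS listener on 10.0.0.2:4420, and bdevperf attaches to it as TLSTESTn1 with the same key before perform_tests drives ten seconds of verify I/O. A minimal re-creation of the initiator side, assuming an SPDK checkout with the target already configured as above (paths and flags mirror the trace; the key is the test key printed in the trace, not a secret):

# initiator-side sketch: run from the SPDK repo root
key=/tmp/spdk_tls_key.txt
echo -n "NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:" > "$key"
chmod 0600 "$key"
# bdevperf in wait-for-RPC mode (-z) with its own RPC socket
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
sleep 2   # give bdevperf time to create /var/tmp/bdevperf.sock
# attach over NVMe/TCP with TLS, presenting the PSK key file
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key"
# kick off the configured verify workload
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests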
00:25:06.007 00:25:06.007 Latency(us) 00:25:06.007 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.007 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:06.007 Verification LBA range: start 0x0 length 0x2000 00:25:06.007 TLSTESTn1 : 10.04 2584.68 10.10 0.00 0.00 49403.76 12621.75 56312.41 00:25:06.007 =================================================================================================================== 00:25:06.007 Total : 2584.68 10.10 0.00 0.00 49403.76 12621.75 56312.41 00:25:06.007 0 00:25:06.007 05:14:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:06.007 05:14:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:06.007 05:14:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:25:06.007 05:14:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:25:06.007 05:14:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:25:06.007 05:14:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:06.007 05:14:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:25:06.007 05:14:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:25:06.007 05:14:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:25:06.007 05:14:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:06.007 nvmf_trace.0 00:25:06.265 05:14:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:25:06.265 05:14:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 747895 00:25:06.265 05:14:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 747895 ']' 00:25:06.265 05:14:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 747895 00:25:06.265 05:14:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:25:06.265 05:14:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:06.265 05:14:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 747895 00:25:06.265 05:14:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:06.265 05:14:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:06.265 05:14:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 747895' 00:25:06.265 killing process with pid 747895 00:25:06.265 05:14:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 747895 00:25:06.265 Received shutdown signal, test time was about 10.000000 seconds 00:25:06.265 00:25:06.265 Latency(us) 00:25:06.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.265 =================================================================================================================== 00:25:06.265 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:06.265 [2024-07-13 05:14:12.582627] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:06.265 05:14:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 747895 00:25:07.200 05:14:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:07.200 05:14:13 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:25:07.200 05:14:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:25:07.200 05:14:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:07.200 05:14:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:25:07.200 05:14:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:07.200 05:14:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:07.200 rmmod nvme_tcp 00:25:07.200 rmmod nvme_fabrics 00:25:07.200 rmmod nvme_keyring 00:25:07.200 05:14:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:07.200 05:14:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:25:07.200 05:14:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:25:07.200 05:14:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 747740 ']' 00:25:07.200 05:14:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 747740 00:25:07.200 05:14:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 747740 ']' 00:25:07.200 05:14:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 747740 00:25:07.200 05:14:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:25:07.200 05:14:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:07.200 05:14:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 747740 00:25:07.200 05:14:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:07.200 05:14:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:07.200 05:14:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 747740' 00:25:07.200 killing process with pid 747740 00:25:07.200 05:14:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 747740 00:25:07.200 [2024-07-13 05:14:13.660018] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:07.200 05:14:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 747740 00:25:08.578 05:14:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:08.578 05:14:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:08.578 05:14:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:08.578 05:14:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:08.578 05:14:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:08.579 05:14:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.579 05:14:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:08.579 05:14:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.107 05:14:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:11.107 05:14:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:11.107 00:25:11.107 real 0m19.921s 00:25:11.107 user 0m26.973s 00:25:11.107 sys 0m5.174s 00:25:11.107 05:14:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:11.107 05:14:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:11.107 ************************************ 00:25:11.107 END TEST nvmf_fips 00:25:11.107 
************************************ 00:25:11.107 05:14:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:11.107 05:14:17 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:25:11.107 05:14:17 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:11.107 05:14:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:11.107 05:14:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:11.107 05:14:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:11.107 ************************************ 00:25:11.107 START TEST nvmf_fuzz 00:25:11.107 ************************************ 00:25:11.107 05:14:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:11.107 * Looking for test storage... 00:25:11.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:11.107 05:14:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.107 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:11.107 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.107 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.107 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.107 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.107 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.107 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.107 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.107 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.107 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.107 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.107 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:11.107 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:11.107 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.107 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.107 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.107 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.107 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:11.107 05:14:17 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.107 05:14:17 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.107 05:14:17 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.108 05:14:17 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.108 05:14:17 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.108 05:14:17 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.108 05:14:17 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:11.108 05:14:17 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.108 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:25:11.108 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:11.108 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:11.108 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.108 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.108 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.108 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:11.108 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:11.108 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:11.108 05:14:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:11.108 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:11.108 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.108 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:11.108 05:14:17 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:11.108 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:11.108 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.108 05:14:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:11.108 05:14:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.108 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:11.108 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:11.108 05:14:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:25:11.108 05:14:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:13.010 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:13.010 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:13.010 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:13.010 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:13.010 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:13.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:13.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:25:13.011 00:25:13.011 --- 10.0.0.2 ping statistics --- 00:25:13.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.011 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:13.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:13.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:25:13.011 00:25:13.011 --- 10.0.0.1 ping statistics --- 00:25:13.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.011 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=751404 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 751404 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 751404 ']' 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
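This is the same nvmf_tcp_init plumbing the FIPS run used: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits TCP/4420, and a ping in each direction proves the path before nvmf_tgt starts inside the namespace. A condensed sketch of that wiring, with the interface names from the trace standing in for whatever ports a given rig detects:

TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"              # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"          # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                             # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1         # target namespace -> root namespace
ip netns exec "$NS" build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # target runs inside the namespace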
00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:13.011 05:14:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:13.944 05:14:20 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:13.944 05:14:20 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:25:13.944 05:14:20 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:13.944 05:14:20 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.944 05:14:20 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:13.944 05:14:20 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.944 05:14:20 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:13.945 05:14:20 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.945 05:14:20 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:13.945 Malloc0 00:25:14.203 05:14:20 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.203 05:14:20 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:14.203 05:14:20 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.203 05:14:20 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:14.203 05:14:20 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.203 05:14:20 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:14.203 05:14:20 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.203 05:14:20 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:14.203 05:14:20 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.203 05:14:20 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:14.203 05:14:20 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.203 05:14:20 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:14.203 05:14:20 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.203 05:14:20 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:14.203 05:14:20 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:46.263 Fuzzing completed. 
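The fuzz target is deliberately tiny: one TCP transport, one 64 MiB malloc namespace in one subsystem, and nvme_fuzz then hammers the connect string for 30 seconds with a fixed seed so any crash is reproducible. A sketch of the same setup against a running nvmf_tgt, with rpc.py invoked from an SPDK checkout (paths abbreviated relative to the workspace-rooted ones in the trace):

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192             # transport options as in the trace
$rpc bdev_malloc_create -b Malloc0 64 512                # 64 MiB RAM disk, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# 30 s of seeded random commands against the listener (flags as in the trace)
test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
    -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a

The second nvme_fuzz invocation below swaps the time-boxed random run for a replay of the crafted commands in example.json via -j, which is why its completed-command counts are tiny by comparison.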
Shutting down the fuzz application 00:25:46.263 00:25:46.263 Dumping successful admin opcodes: 00:25:46.263 8, 9, 10, 24, 00:25:46.263 Dumping successful io opcodes: 00:25:46.263 0, 9, 00:25:46.263 NS: 0x200003aefec0 I/O qp, Total commands completed: 328610, total successful commands: 1945, random_seed: 2051626688 00:25:46.263 NS: 0x200003aefec0 admin qp, Total commands completed: 41392, total successful commands: 337, random_seed: 115078528 00:25:46.263 05:14:51 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:47.196 Fuzzing completed. Shutting down the fuzz application 00:25:47.196 00:25:47.196 Dumping successful admin opcodes: 00:25:47.196 24, 00:25:47.196 Dumping successful io opcodes: 00:25:47.196 00:25:47.196 NS: 0x200003aefec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2127855973 00:25:47.196 NS: 0x200003aefec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2128050233 00:25:47.196 05:14:53 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:47.196 05:14:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.196 05:14:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:47.196 05:14:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.196 05:14:53 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:47.196 05:14:53 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:47.196 05:14:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:47.196 05:14:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:25:47.196 05:14:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:47.196 05:14:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:25:47.196 05:14:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:47.196 05:14:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:47.196 rmmod nvme_tcp 00:25:47.196 rmmod nvme_fabrics 00:25:47.196 rmmod nvme_keyring 00:25:47.196 05:14:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:47.196 05:14:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:25:47.196 05:14:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:25:47.196 05:14:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 751404 ']' 00:25:47.196 05:14:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 751404 00:25:47.196 05:14:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 751404 ']' 00:25:47.196 05:14:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 751404 00:25:47.196 05:14:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:25:47.196 05:14:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:47.196 05:14:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 751404 00:25:47.196 05:14:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:47.196 05:14:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:47.196 
05:14:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 751404' 00:25:47.196 killing process with pid 751404 00:25:47.196 05:14:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 751404 00:25:47.196 05:14:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 751404 00:25:49.097 05:14:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:49.097 05:14:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:49.097 05:14:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:49.097 05:14:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:49.097 05:14:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:49.097 05:14:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.097 05:14:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:49.097 05:14:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.007 05:14:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:51.007 05:14:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:51.007 00:25:51.007 real 0m40.011s 00:25:51.007 user 0m57.441s 00:25:51.007 sys 0m13.694s 00:25:51.007 05:14:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:51.007 05:14:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:51.007 ************************************ 00:25:51.007 END TEST nvmf_fuzz 00:25:51.007 ************************************ 00:25:51.007 05:14:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:51.007 05:14:57 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:51.007 05:14:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:51.007 05:14:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:51.007 05:14:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:51.007 ************************************ 00:25:51.007 START TEST nvmf_multiconnection 00:25:51.007 ************************************ 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:51.007 * Looking for test storage... 
00:25:51.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:25:51.007 05:14:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:52.911 05:14:59 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:52.911 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:52.912 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:52.912 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:52.912 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:52.912 05:14:59 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:52.912 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:52.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:52.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:25:52.912 00:25:52.912 --- 10.0.0.2 ping statistics --- 00:25:52.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.912 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:52.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:52.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:25:52.912 00:25:52.912 --- 10.0.0.1 ping statistics --- 00:25:52.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.912 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=757394 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 757394 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 757394 ']' 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:52.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
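The stretch above is nvmf_tcp_init wiring up a two-endpoint NVMe/TCP topology on a single host: the first e810 port (cvl_0_0) is moved into the network namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2 to play the target, while its peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, so traffic between the two crosses the physical link. The two one-packet pings verify reachability in both directions before nvmf_tgt is launched inside the namespace (hence the ip netns exec prefix folded into NVMF_APP). A minimal sketch of the same plumbing, with interface names and addresses taken from this run:

  ip netns add cvl_0_0_ns_spdk                                  # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move one physical port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # ACCEPT rule for NVMe/TCP port 4420
  ping -c 1 10.0.0.2                                            # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target namespace -> root namespace

Because the RPC socket /var/tmp/spdk.sock is a Unix-domain socket and therefore unaffected by network namespaces, the later rpc_cmd calls reach the namespaced target without any ip netns exec wrapper of their own.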
00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:52.912 05:14:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.912 [2024-07-13 05:14:59.267398] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:52.912 [2024-07-13 05:14:59.267545] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:52.912 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.912 [2024-07-13 05:14:59.398090] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:53.171 [2024-07-13 05:14:59.659628] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:53.171 [2024-07-13 05:14:59.659695] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:53.171 [2024-07-13 05:14:59.659733] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:53.171 [2024-07-13 05:14:59.659756] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:53.171 [2024-07-13 05:14:59.659785] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:53.171 [2024-07-13 05:14:59.659934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.171 [2024-07-13 05:14:59.659964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:53.171 [2024-07-13 05:14:59.660031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.171 [2024-07-13 05:14:59.660039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:53.738 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:53.738 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:25:53.738 05:15:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:53.738 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:53.738 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.738 05:15:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:53.738 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:53.738 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.738 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.738 [2024-07-13 05:15:00.219661] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:53.738 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.738 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:53.738 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.738 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:53.738 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.738 
05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.998 Malloc1 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.998 [2024-07-13 05:15:00.329467] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.998 Malloc2 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.998 05:15:00 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.998 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.257 Malloc3 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.257 Malloc4 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.257 Malloc5 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.257 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:54.258 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.258 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.258 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.258 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:54.258 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.258 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.258 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.258 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:54.258 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:54.258 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.258 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.516 Malloc6 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.516 05:15:00 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.516 Malloc7 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:54.516 05:15:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:54.517 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.517 05:15:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.517 Malloc8 00:25:54.517 05:15:01 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.517 05:15:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:54.517 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.517 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.775 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.775 05:15:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:54.775 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.775 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.775 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.775 05:15:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:54.775 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.775 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.775 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.775 05:15:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:54.775 05:15:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.776 Malloc9 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.776 Malloc10 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.776 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.035 Malloc11 00:25:55.035 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.035 05:15:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:55.035 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.035 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.035 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.035 05:15:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:55.035 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.035 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.035 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.035 05:15:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
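The eleven near-identical blocks above are the provisioning loop from multiconnection.sh: for each i in 1..11 the test creates a 64 MiB malloc bdev with 512-byte blocks, wraps it in subsystem nqn.2016-06.io.spdk:cnode$i (serial SPDK$i), and exposes it on the shared TCP listener at 10.0.0.2:4420. One iteration, expressed directly against scripts/rpc.py — a sketch only, assuming the default /var/tmp/spdk.sock socket that rpc_cmd also talks to:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                        # done once, before the loop (multiconnection.sh@19)
  $rpc bdev_malloc_create 64 512 -b Malloc1                           # 64 MiB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1   # -a: allow any host, -s: serial number
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1       # bdev becomes the subsystem's namespace
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

All eleven subsystems share one listener address and port, so the initiator below distinguishes them purely by subsystem NQN.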
00:25:55.035 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.035 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.035 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.035 05:15:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:55.035 05:15:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.035 05:15:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:55.601 05:15:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:55.601 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:55.601 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:55.601 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:55.601 05:15:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:57.500 05:15:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:57.500 05:15:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:57.500 05:15:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:25:57.500 05:15:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:57.500 05:15:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:57.500 05:15:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:57.500 05:15:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.500 05:15:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:58.434 05:15:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:58.434 05:15:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:58.434 05:15:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:58.434 05:15:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:58.434 05:15:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:00.330 05:15:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:00.331 05:15:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:00.331 05:15:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:26:00.331 05:15:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:00.331 05:15:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:00.331 
05:15:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:00.331 05:15:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:00.331 05:15:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:01.265 05:15:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:01.265 05:15:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:01.265 05:15:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:01.265 05:15:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:01.265 05:15:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:03.162 05:15:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:03.162 05:15:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:03.162 05:15:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:26:03.162 05:15:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:03.162 05:15:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:03.162 05:15:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:03.162 05:15:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.162 05:15:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:03.728 05:15:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:03.728 05:15:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:03.728 05:15:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:03.728 05:15:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:03.728 05:15:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:05.638 05:15:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:05.638 05:15:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:05.638 05:15:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:26:05.638 05:15:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:05.638 05:15:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:05.638 05:15:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:05.638 05:15:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:05.638 05:15:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:06.622 05:15:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:06.622 05:15:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:06.622 05:15:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:06.622 05:15:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:06.622 05:15:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:08.515 05:15:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:08.515 05:15:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:08.515 05:15:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:26:08.515 05:15:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:08.515 05:15:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:08.515 05:15:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:08.515 05:15:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:08.515 05:15:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:09.080 05:15:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:09.080 05:15:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:09.080 05:15:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:09.080 05:15:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:09.080 05:15:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:11.609 05:15:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:11.609 05:15:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:11.609 05:15:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:26:11.609 05:15:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:11.609 05:15:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:11.609 05:15:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:11.609 05:15:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:11.609 05:15:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:12.175 05:15:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:12.175 05:15:18 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:12.175 05:15:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:12.175 05:15:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:12.175 05:15:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:14.076 05:15:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:14.076 05:15:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:14.076 05:15:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:26:14.076 05:15:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:14.076 05:15:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:14.076 05:15:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:14.076 05:15:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:14.076 05:15:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:15.008 05:15:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:15.008 05:15:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:15.008 05:15:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:15.008 05:15:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:15.008 05:15:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:16.903 05:15:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:16.903 05:15:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:16.903 05:15:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:26:16.903 05:15:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:16.903 05:15:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:16.903 05:15:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:16.903 05:15:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:16.903 05:15:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:17.833 05:15:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:17.833 05:15:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:17.833 05:15:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:17.833 05:15:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 
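The pattern repeating through this stretch is the initiator-side counterpart: for each cnode$i, nvme connect is issued from the root namespace, then waitforserial polls lsblk until a block device whose serial matches SPDK$i shows up. A simplified sketch of one round, with the host NQN/ID copied from this run (waitforserial itself also gives up after 15 polls, elided here):

  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode8 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDK8)" -ge 1 ]; do    # waitforserial, simplified
      sleep 2
  done

The 2-second sleeps account for most of the wall-clock gap between successive connects in the timestamps above.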
00:26:17.833 05:15:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:20.358 05:15:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:20.358 05:15:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:20.358 05:15:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:26:20.358 05:15:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:20.358 05:15:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:20.358 05:15:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:20.358 05:15:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:20.358 05:15:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:20.924 05:15:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:20.924 05:15:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:20.924 05:15:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:20.924 05:15:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:20.924 05:15:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:22.830 05:15:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:22.830 05:15:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:22.830 05:15:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:26:22.830 05:15:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:22.830 05:15:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:22.830 05:15:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:22.830 05:15:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.830 05:15:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:23.764 05:15:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:23.764 05:15:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:23.764 05:15:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:23.764 05:15:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:23.764 05:15:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:26.288 05:15:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:26.288 05:15:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:26:26.288 05:15:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:26:26.288 05:15:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:26.288 05:15:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:26.288 05:15:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:26.288 05:15:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:26.288 [global] 00:26:26.288 thread=1 00:26:26.288 invalidate=1 00:26:26.288 rw=read 00:26:26.288 time_based=1 00:26:26.288 runtime=10 00:26:26.288 ioengine=libaio 00:26:26.288 direct=1 00:26:26.288 bs=262144 00:26:26.288 iodepth=64 00:26:26.288 norandommap=1 00:26:26.288 numjobs=1 00:26:26.288 00:26:26.288 [job0] 00:26:26.288 filename=/dev/nvme0n1 00:26:26.288 [job1] 00:26:26.288 filename=/dev/nvme10n1 00:26:26.288 [job2] 00:26:26.288 filename=/dev/nvme1n1 00:26:26.288 [job3] 00:26:26.288 filename=/dev/nvme2n1 00:26:26.288 [job4] 00:26:26.288 filename=/dev/nvme3n1 00:26:26.288 [job5] 00:26:26.288 filename=/dev/nvme4n1 00:26:26.288 [job6] 00:26:26.288 filename=/dev/nvme5n1 00:26:26.288 [job7] 00:26:26.288 filename=/dev/nvme6n1 00:26:26.288 [job8] 00:26:26.288 filename=/dev/nvme7n1 00:26:26.288 [job9] 00:26:26.288 filename=/dev/nvme8n1 00:26:26.288 [job10] 00:26:26.288 filename=/dev/nvme9n1 00:26:26.288 Could not set queue depth (nvme0n1) 00:26:26.288 Could not set queue depth (nvme10n1) 00:26:26.288 Could not set queue depth (nvme1n1) 00:26:26.288 Could not set queue depth (nvme2n1) 00:26:26.288 Could not set queue depth (nvme3n1) 00:26:26.288 Could not set queue depth (nvme4n1) 00:26:26.288 Could not set queue depth (nvme5n1) 00:26:26.288 Could not set queue depth (nvme6n1) 00:26:26.288 Could not set queue depth (nvme7n1) 00:26:26.288 Could not set queue depth (nvme8n1) 00:26:26.288 Could not set queue depth (nvme9n1) 00:26:26.288 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:26.288 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:26.288 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:26.288 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:26.288 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:26.288 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:26.288 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:26.288 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:26.288 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:26.288 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:26.288 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:26.288 fio-3.35 00:26:26.288 Starting 11 threads 00:26:38.484 00:26:38.484 job0: 
(groupid=0, jobs=1): err= 0: pid=762393: Sat Jul 13 05:15:43 2024 00:26:38.484 read: IOPS=570, BW=143MiB/s (150MB/s)(1449MiB/10151msec) 00:26:38.484 slat (usec): min=9, max=83325, avg=839.38, stdev=4191.32 00:26:38.484 clat (msec): min=7, max=316, avg=111.14, stdev=49.84 00:26:38.485 lat (msec): min=7, max=316, avg=111.98, stdev=50.39 00:26:38.485 clat percentiles (msec): 00:26:38.485 | 1.00th=[ 19], 5.00th=[ 36], 10.00th=[ 54], 20.00th=[ 77], 00:26:38.485 | 30.00th=[ 85], 40.00th=[ 94], 50.00th=[ 104], 60.00th=[ 112], 00:26:38.485 | 70.00th=[ 124], 80.00th=[ 155], 90.00th=[ 188], 95.00th=[ 211], 00:26:38.485 | 99.00th=[ 239], 99.50th=[ 245], 99.90th=[ 264], 99.95th=[ 309], 00:26:38.485 | 99.99th=[ 317] 00:26:38.485 bw ( KiB/s): min=66560, max=206336, per=8.93%, avg=146737.75, stdev=38123.75, samples=20 00:26:38.485 iops : min= 260, max= 806, avg=573.15, stdev=148.92, samples=20 00:26:38.485 lat (msec) : 10=0.17%, 20=0.95%, 50=8.06%, 100=37.09%, 250=53.47% 00:26:38.485 lat (msec) : 500=0.26% 00:26:38.485 cpu : usr=0.12%, sys=1.59%, ctx=1238, majf=0, minf=4097 00:26:38.485 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:38.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:38.485 issued rwts: total=5796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.485 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:38.485 job1: (groupid=0, jobs=1): err= 0: pid=762395: Sat Jul 13 05:15:43 2024 00:26:38.485 read: IOPS=691, BW=173MiB/s (181MB/s)(1750MiB/10117msec) 00:26:38.485 slat (usec): min=10, max=79515, avg=1218.38, stdev=3923.04 00:26:38.485 clat (msec): min=7, max=272, avg=91.21, stdev=32.70 00:26:38.485 lat (msec): min=8, max=272, avg=92.43, stdev=32.82 00:26:38.485 clat percentiles (msec): 00:26:38.485 | 1.00th=[ 42], 5.00th=[ 53], 10.00th=[ 59], 20.00th=[ 66], 00:26:38.485 | 30.00th=[ 72], 40.00th=[ 79], 50.00th=[ 85], 60.00th=[ 92], 00:26:38.485 | 70.00th=[ 102], 80.00th=[ 114], 90.00th=[ 132], 95.00th=[ 157], 00:26:38.485 | 99.00th=[ 197], 99.50th=[ 228], 99.90th=[ 271], 99.95th=[ 271], 00:26:38.485 | 99.99th=[ 271] 00:26:38.485 bw ( KiB/s): min=114176, max=263680, per=10.81%, avg=177529.05, stdev=42359.75, samples=20 00:26:38.485 iops : min= 446, max= 1030, avg=693.45, stdev=165.46, samples=20 00:26:38.485 lat (msec) : 10=0.07%, 20=0.17%, 50=3.59%, 100=65.31%, 250=30.40% 00:26:38.485 lat (msec) : 500=0.46% 00:26:38.485 cpu : usr=0.34%, sys=2.14%, ctx=1297, majf=0, minf=3721 00:26:38.485 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:26:38.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:38.485 issued rwts: total=6999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.485 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:38.485 job2: (groupid=0, jobs=1): err= 0: pid=762400: Sat Jul 13 05:15:43 2024 00:26:38.485 read: IOPS=428, BW=107MiB/s (112MB/s)(1083MiB/10112msec) 00:26:38.485 slat (usec): min=13, max=63759, avg=2248.06, stdev=6090.52 00:26:38.485 clat (msec): min=49, max=278, avg=147.02, stdev=37.80 00:26:38.485 lat (msec): min=49, max=283, avg=149.27, stdev=38.56 00:26:38.485 clat percentiles (msec): 00:26:38.485 | 1.00th=[ 86], 5.00th=[ 96], 10.00th=[ 102], 20.00th=[ 110], 00:26:38.485 | 30.00th=[ 123], 40.00th=[ 134], 50.00th=[ 146], 60.00th=[ 157], 00:26:38.485 | 70.00th=[ 
167], 80.00th=[ 180], 90.00th=[ 199], 95.00th=[ 213], 00:26:38.485 | 99.00th=[ 251], 99.50th=[ 257], 99.90th=[ 275], 99.95th=[ 275], 00:26:38.485 | 99.99th=[ 279] 00:26:38.485 bw ( KiB/s): min=66560, max=151040, per=6.65%, avg=109274.05, stdev=23688.44, samples=20 00:26:38.485 iops : min= 260, max= 590, avg=426.80, stdev=92.44, samples=20 00:26:38.485 lat (msec) : 50=0.18%, 100=8.70%, 250=90.19%, 500=0.92% 00:26:38.485 cpu : usr=0.21%, sys=1.56%, ctx=862, majf=0, minf=4097 00:26:38.485 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:26:38.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:38.485 issued rwts: total=4332,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.485 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:38.485 job3: (groupid=0, jobs=1): err= 0: pid=762412: Sat Jul 13 05:15:43 2024 00:26:38.485 read: IOPS=476, BW=119MiB/s (125MB/s)(1204MiB/10112msec) 00:26:38.485 slat (usec): min=9, max=111896, avg=1508.07, stdev=5479.18 00:26:38.485 clat (msec): min=20, max=263, avg=132.81, stdev=43.20 00:26:38.485 lat (msec): min=20, max=355, avg=134.32, stdev=43.77 00:26:38.485 clat percentiles (msec): 00:26:38.485 | 1.00th=[ 40], 5.00th=[ 63], 10.00th=[ 81], 20.00th=[ 100], 00:26:38.485 | 30.00th=[ 110], 40.00th=[ 120], 50.00th=[ 129], 60.00th=[ 138], 00:26:38.485 | 70.00th=[ 155], 80.00th=[ 169], 90.00th=[ 194], 95.00th=[ 211], 00:26:38.485 | 99.00th=[ 239], 99.50th=[ 243], 99.90th=[ 257], 99.95th=[ 262], 00:26:38.485 | 99.99th=[ 264] 00:26:38.485 bw ( KiB/s): min=92160, max=158914, per=7.40%, avg=121595.30, stdev=20192.15, samples=20 00:26:38.485 iops : min= 360, max= 620, avg=474.90, stdev=78.75, samples=20 00:26:38.485 lat (msec) : 50=2.26%, 100=18.20%, 250=79.27%, 500=0.27% 00:26:38.485 cpu : usr=0.30%, sys=1.47%, ctx=1099, majf=0, minf=4097 00:26:38.485 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:38.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:38.485 issued rwts: total=4814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.485 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:38.485 job4: (groupid=0, jobs=1): err= 0: pid=762417: Sat Jul 13 05:15:43 2024 00:26:38.485 read: IOPS=456, BW=114MiB/s (120MB/s)(1156MiB/10115msec) 00:26:38.485 slat (usec): min=9, max=93168, avg=1214.25, stdev=5243.86 00:26:38.485 clat (msec): min=3, max=276, avg=138.73, stdev=45.26 00:26:38.485 lat (msec): min=3, max=276, avg=139.94, stdev=46.10 00:26:38.485 clat percentiles (msec): 00:26:38.485 | 1.00th=[ 26], 5.00th=[ 65], 10.00th=[ 87], 20.00th=[ 103], 00:26:38.485 | 30.00th=[ 114], 40.00th=[ 126], 50.00th=[ 138], 60.00th=[ 150], 00:26:38.485 | 70.00th=[ 163], 80.00th=[ 178], 90.00th=[ 201], 95.00th=[ 213], 00:26:38.485 | 99.00th=[ 241], 99.50th=[ 247], 99.90th=[ 257], 99.95th=[ 266], 00:26:38.485 | 99.99th=[ 275] 00:26:38.485 bw ( KiB/s): min=76288, max=163840, per=7.10%, avg=116684.00, stdev=20055.11, samples=20 00:26:38.485 iops : min= 298, max= 640, avg=455.75, stdev=78.28, samples=20 00:26:38.485 lat (msec) : 4=0.02%, 10=0.24%, 20=0.54%, 50=2.44%, 100=14.91% 00:26:38.485 lat (msec) : 250=81.54%, 500=0.30% 00:26:38.485 cpu : usr=0.17%, sys=1.31%, ctx=1054, majf=0, minf=4097 00:26:38.485 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:26:38.485 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:38.485 issued rwts: total=4622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.485 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:38.485 job5: (groupid=0, jobs=1): err= 0: pid=762443: Sat Jul 13 05:15:43 2024 00:26:38.485 read: IOPS=784, BW=196MiB/s (206MB/s)(1968MiB/10027msec) 00:26:38.485 slat (usec): min=10, max=60465, avg=1251.68, stdev=4273.07 00:26:38.485 clat (msec): min=10, max=256, avg=80.21, stdev=36.87 00:26:38.485 lat (msec): min=10, max=260, avg=81.47, stdev=37.41 00:26:38.485 clat percentiles (msec): 00:26:38.485 | 1.00th=[ 32], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 52], 00:26:38.485 | 30.00th=[ 58], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 79], 00:26:38.485 | 70.00th=[ 89], 80.00th=[ 102], 90.00th=[ 127], 95.00th=[ 157], 00:26:38.485 | 99.00th=[ 213], 99.50th=[ 220], 99.90th=[ 234], 99.95th=[ 257], 00:26:38.485 | 99.99th=[ 257] 00:26:38.485 bw ( KiB/s): min=76135, max=325120, per=12.17%, avg=199838.55, stdev=61649.79, samples=20 00:26:38.485 iops : min= 297, max= 1270, avg=780.55, stdev=240.92, samples=20 00:26:38.485 lat (msec) : 20=0.14%, 50=16.83%, 100=62.05%, 250=20.90%, 500=0.08% 00:26:38.485 cpu : usr=0.30%, sys=2.40%, ctx=1041, majf=0, minf=4097 00:26:38.485 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:38.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:38.485 issued rwts: total=7871,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.485 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:38.485 job6: (groupid=0, jobs=1): err= 0: pid=762466: Sat Jul 13 05:15:43 2024 00:26:38.485 read: IOPS=681, BW=170MiB/s (179MB/s)(1732MiB/10159msec) 00:26:38.485 slat (usec): min=9, max=66663, avg=1081.47, stdev=3714.79 00:26:38.485 clat (msec): min=9, max=277, avg=92.67, stdev=33.60 00:26:38.485 lat (msec): min=9, max=277, avg=93.75, stdev=33.82 00:26:38.485 clat percentiles (msec): 00:26:38.485 | 1.00th=[ 25], 5.00th=[ 45], 10.00th=[ 63], 20.00th=[ 71], 00:26:38.485 | 30.00th=[ 75], 40.00th=[ 81], 50.00th=[ 86], 60.00th=[ 92], 00:26:38.485 | 70.00th=[ 105], 80.00th=[ 120], 90.00th=[ 133], 95.00th=[ 148], 00:26:38.485 | 99.00th=[ 197], 99.50th=[ 243], 99.90th=[ 264], 99.95th=[ 275], 00:26:38.485 | 99.99th=[ 279] 00:26:38.485 bw ( KiB/s): min=124416, max=209920, per=10.70%, avg=175709.95, stdev=27428.59, samples=20 00:26:38.485 iops : min= 486, max= 820, avg=686.30, stdev=107.17, samples=20 00:26:38.485 lat (msec) : 10=0.01%, 20=0.58%, 50=5.08%, 100=61.82%, 250=32.02% 00:26:38.485 lat (msec) : 500=0.49% 00:26:38.485 cpu : usr=0.33%, sys=2.10%, ctx=1304, majf=0, minf=4097 00:26:38.485 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:26:38.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:38.485 issued rwts: total=6928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.485 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:38.485 job7: (groupid=0, jobs=1): err= 0: pid=762484: Sat Jul 13 05:15:43 2024 00:26:38.485 read: IOPS=443, BW=111MiB/s (116MB/s)(1122MiB/10118msec) 00:26:38.485 slat (usec): min=13, max=96142, avg=1794.90, stdev=6098.17 00:26:38.485 clat (msec): min=13, max=291, avg=142.42, stdev=47.52 
00:26:38.485 lat (msec): min=13, max=299, avg=144.21, stdev=48.45 00:26:38.485 clat percentiles (msec): 00:26:38.485 | 1.00th=[ 28], 5.00th=[ 56], 10.00th=[ 84], 20.00th=[ 104], 00:26:38.485 | 30.00th=[ 117], 40.00th=[ 133], 50.00th=[ 144], 60.00th=[ 157], 00:26:38.485 | 70.00th=[ 169], 80.00th=[ 182], 90.00th=[ 205], 95.00th=[ 220], 00:26:38.485 | 99.00th=[ 249], 99.50th=[ 253], 99.90th=[ 284], 99.95th=[ 292], 00:26:38.485 | 99.99th=[ 292] 00:26:38.485 bw ( KiB/s): min=67072, max=185856, per=6.89%, avg=113181.05, stdev=27480.15, samples=20 00:26:38.485 iops : min= 262, max= 726, avg=442.10, stdev=107.35, samples=20 00:26:38.485 lat (msec) : 20=0.47%, 50=3.30%, 100=14.13%, 250=81.19%, 500=0.91% 00:26:38.485 cpu : usr=0.25%, sys=1.52%, ctx=951, majf=0, minf=4097 00:26:38.485 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:38.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:38.485 issued rwts: total=4486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.485 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:38.485 job8: (groupid=0, jobs=1): err= 0: pid=762537: Sat Jul 13 05:15:43 2024 00:26:38.486 read: IOPS=953, BW=238MiB/s (250MB/s)(2388MiB/10018msec) 00:26:38.486 slat (usec): min=11, max=44560, avg=1009.08, stdev=2946.68 00:26:38.486 clat (msec): min=7, max=257, avg=66.08, stdev=26.42 00:26:38.486 lat (msec): min=7, max=257, avg=67.09, stdev=26.71 00:26:38.486 clat percentiles (msec): 00:26:38.486 | 1.00th=[ 32], 5.00th=[ 37], 10.00th=[ 39], 20.00th=[ 43], 00:26:38.486 | 30.00th=[ 47], 40.00th=[ 55], 50.00th=[ 63], 60.00th=[ 70], 00:26:38.486 | 70.00th=[ 78], 80.00th=[ 87], 90.00th=[ 102], 95.00th=[ 110], 00:26:38.486 | 99.00th=[ 129], 99.50th=[ 215], 99.90th=[ 245], 99.95th=[ 251], 00:26:38.486 | 99.99th=[ 257] 00:26:38.486 bw ( KiB/s): min=141824, max=389365, per=14.78%, avg=242798.60, stdev=71462.73, samples=20 00:26:38.486 iops : min= 554, max= 1520, avg=948.35, stdev=279.02, samples=20 00:26:38.486 lat (msec) : 10=0.03%, 20=0.18%, 50=33.52%, 100=55.73%, 250=10.48% 00:26:38.486 lat (msec) : 500=0.06% 00:26:38.486 cpu : usr=0.56%, sys=2.97%, ctx=1537, majf=0, minf=4097 00:26:38.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:26:38.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:38.486 issued rwts: total=9550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.486 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:38.486 job9: (groupid=0, jobs=1): err= 0: pid=762550: Sat Jul 13 05:15:43 2024 00:26:38.486 read: IOPS=522, BW=131MiB/s (137MB/s)(1322MiB/10115msec) 00:26:38.486 slat (usec): min=9, max=41668, avg=1248.63, stdev=4143.84 00:26:38.486 clat (msec): min=13, max=263, avg=121.13, stdev=38.90 00:26:38.486 lat (msec): min=13, max=265, avg=122.38, stdev=39.40 00:26:38.486 clat percentiles (msec): 00:26:38.486 | 1.00th=[ 34], 5.00th=[ 62], 10.00th=[ 78], 20.00th=[ 93], 00:26:38.486 | 30.00th=[ 102], 40.00th=[ 109], 50.00th=[ 116], 60.00th=[ 125], 00:26:38.486 | 70.00th=[ 136], 80.00th=[ 150], 90.00th=[ 176], 95.00th=[ 197], 00:26:38.486 | 99.00th=[ 228], 99.50th=[ 232], 99.90th=[ 243], 99.95th=[ 264], 00:26:38.486 | 99.99th=[ 264] 00:26:38.486 bw ( KiB/s): min=73069, max=166912, per=8.14%, avg=133663.45, stdev=25771.99, samples=20 00:26:38.486 iops : min= 285, max= 652, 
avg=522.10, stdev=100.73, samples=20 00:26:38.486 lat (msec) : 20=0.11%, 50=2.59%, 100=25.75%, 250=71.49%, 500=0.06% 00:26:38.486 cpu : usr=0.27%, sys=1.50%, ctx=1202, majf=0, minf=4097 00:26:38.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:38.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:38.486 issued rwts: total=5286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.486 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:38.486 job10: (groupid=0, jobs=1): err= 0: pid=762552: Sat Jul 13 05:15:43 2024 00:26:38.486 read: IOPS=444, BW=111MiB/s (116MB/s)(1124MiB/10115msec) 00:26:38.486 slat (usec): min=9, max=119381, avg=1335.39, stdev=5383.94 00:26:38.486 clat (msec): min=5, max=289, avg=142.56, stdev=50.12 00:26:38.486 lat (msec): min=5, max=309, avg=143.90, stdev=50.86 00:26:38.486 clat percentiles (msec): 00:26:38.486 | 1.00th=[ 27], 5.00th=[ 53], 10.00th=[ 71], 20.00th=[ 103], 00:26:38.486 | 30.00th=[ 124], 40.00th=[ 136], 50.00th=[ 146], 60.00th=[ 157], 00:26:38.486 | 70.00th=[ 167], 80.00th=[ 184], 90.00th=[ 205], 95.00th=[ 222], 00:26:38.486 | 99.00th=[ 249], 99.50th=[ 271], 99.90th=[ 279], 99.95th=[ 288], 00:26:38.486 | 99.99th=[ 292] 00:26:38.486 bw ( KiB/s): min=73216, max=159232, per=6.91%, avg=113418.25, stdev=24866.44, samples=20 00:26:38.486 iops : min= 286, max= 622, avg=443.00, stdev=97.17, samples=20 00:26:38.486 lat (msec) : 10=0.16%, 20=0.51%, 50=4.09%, 100=14.59%, 250=79.76% 00:26:38.486 lat (msec) : 500=0.89% 00:26:38.486 cpu : usr=0.29%, sys=1.23%, ctx=1103, majf=0, minf=4097 00:26:38.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:38.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:38.486 issued rwts: total=4495,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.486 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:38.486 00:26:38.486 Run status group 0 (all jobs): 00:26:38.486 READ: bw=1604MiB/s (1682MB/s), 107MiB/s-238MiB/s (112MB/s-250MB/s), io=15.9GiB (17.1GB), run=10018-10159msec 00:26:38.486 00:26:38.486 Disk stats (read/write): 00:26:38.486 nvme0n1: ios=11360/0, merge=0/0, ticks=1242025/0, in_queue=1242025, util=97.03% 00:26:38.486 nvme10n1: ios=13830/0, merge=0/0, ticks=1235927/0, in_queue=1235927, util=97.25% 00:26:38.486 nvme1n1: ios=8495/0, merge=0/0, ticks=1230218/0, in_queue=1230218, util=97.51% 00:26:38.486 nvme2n1: ios=9422/0, merge=0/0, ticks=1236977/0, in_queue=1236977, util=97.69% 00:26:38.486 nvme3n1: ios=9027/0, merge=0/0, ticks=1236462/0, in_queue=1236462, util=97.77% 00:26:38.486 nvme4n1: ios=15425/0, merge=0/0, ticks=1238916/0, in_queue=1238916, util=98.14% 00:26:38.486 nvme5n1: ios=13683/0, merge=0/0, ticks=1240948/0, in_queue=1240948, util=98.32% 00:26:38.486 nvme6n1: ios=8771/0, merge=0/0, ticks=1229725/0, in_queue=1229725, util=98.45% 00:26:38.486 nvme7n1: ios=18835/0, merge=0/0, ticks=1239842/0, in_queue=1239842, util=98.90% 00:26:38.486 nvme8n1: ios=10389/0, merge=0/0, ticks=1238843/0, in_queue=1238843, util=99.10% 00:26:38.486 nvme9n1: ios=8793/0, merge=0/0, ticks=1236330/0, in_queue=1236330, util=99.23% 00:26:38.486 05:15:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 
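The fio-wrapper flags above map one-to-one onto the job file that fio dumps next: -i 262144 becomes bs=262144, -d 64 becomes iodepth=64, -t randwrite becomes rw=randwrite, and -r 10 becomes runtime=10 with time_based=1. As a minimal sketch, and assuming the wrapper does nothing beyond templating these flags into the [global] section shown below, an equivalent standalone fio run for a single namespace (filename taken from the [job0] stanza) would look roughly like:

    # Sketch only: equivalent direct fio invocation for job0, assuming the
    # wrapper merely templates its flags into the job file dumped below.
    fio --name=job0 --filename=/dev/nvme0n1 \
        --rw=randwrite --bs=262144 --iodepth=64 \
        --runtime=10 --time_based=1 \
        --ioengine=libaio --direct=1 --thread=1 --invalidate=1 \
        --norandommap=1 --numjobs=1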
00:26:38.486 [global] 00:26:38.486 thread=1 00:26:38.486 invalidate=1 00:26:38.486 rw=randwrite 00:26:38.486 time_based=1 00:26:38.486 runtime=10 00:26:38.486 ioengine=libaio 00:26:38.486 direct=1 00:26:38.486 bs=262144 00:26:38.486 iodepth=64 00:26:38.486 norandommap=1 00:26:38.486 numjobs=1 00:26:38.486 00:26:38.486 [job0] 00:26:38.486 filename=/dev/nvme0n1 00:26:38.486 [job1] 00:26:38.486 filename=/dev/nvme10n1 00:26:38.486 [job2] 00:26:38.486 filename=/dev/nvme1n1 00:26:38.486 [job3] 00:26:38.486 filename=/dev/nvme2n1 00:26:38.486 [job4] 00:26:38.486 filename=/dev/nvme3n1 00:26:38.486 [job5] 00:26:38.486 filename=/dev/nvme4n1 00:26:38.486 [job6] 00:26:38.486 filename=/dev/nvme5n1 00:26:38.486 [job7] 00:26:38.486 filename=/dev/nvme6n1 00:26:38.486 [job8] 00:26:38.486 filename=/dev/nvme7n1 00:26:38.486 [job9] 00:26:38.486 filename=/dev/nvme8n1 00:26:38.486 [job10] 00:26:38.486 filename=/dev/nvme9n1 00:26:38.486 Could not set queue depth (nvme0n1) 00:26:38.486 Could not set queue depth (nvme10n1) 00:26:38.486 Could not set queue depth (nvme1n1) 00:26:38.486 Could not set queue depth (nvme2n1) 00:26:38.486 Could not set queue depth (nvme3n1) 00:26:38.486 Could not set queue depth (nvme4n1) 00:26:38.486 Could not set queue depth (nvme5n1) 00:26:38.486 Could not set queue depth (nvme6n1) 00:26:38.486 Could not set queue depth (nvme7n1) 00:26:38.486 Could not set queue depth (nvme8n1) 00:26:38.486 Could not set queue depth (nvme9n1) 00:26:38.486 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:38.486 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:38.486 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:38.486 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:38.486 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:38.486 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:38.486 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:38.486 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:38.486 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:38.486 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:38.486 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:38.486 fio-3.35 00:26:38.486 Starting 11 threads 00:26:48.461 00:26:48.461 job0: (groupid=0, jobs=1): err= 0: pid=763565: Sat Jul 13 05:15:54 2024 00:26:48.461 write: IOPS=345, BW=86.3MiB/s (90.5MB/s)(885MiB/10252msec); 0 zone resets 00:26:48.461 slat (usec): min=22, max=158742, avg=2236.00, stdev=7466.62 00:26:48.461 clat (msec): min=4, max=552, avg=182.98, stdev=123.58 00:26:48.461 lat (msec): min=4, max=552, avg=185.22, stdev=125.25 00:26:48.461 clat percentiles (msec): 00:26:48.461 | 1.00th=[ 16], 5.00th=[ 34], 10.00th=[ 56], 20.00th=[ 86], 00:26:48.461 | 30.00th=[ 100], 40.00th=[ 104], 50.00th=[ 125], 60.00th=[ 174], 00:26:48.461 | 70.00th=[ 266], 80.00th=[ 313], 
90.00th=[ 368], 95.00th=[ 418], 00:26:48.461 | 99.00th=[ 464], 99.50th=[ 477], 99.90th=[ 535], 99.95th=[ 550], 00:26:48.461 | 99.99th=[ 550] 00:26:48.461 bw ( KiB/s): min=34816, max=205312, per=8.28%, avg=88998.15, stdev=52160.98, samples=20 00:26:48.461 iops : min= 136, max= 802, avg=347.60, stdev=203.72, samples=20 00:26:48.461 lat (msec) : 10=0.37%, 20=1.81%, 50=5.51%, 100=28.31%, 250=29.89% 00:26:48.461 lat (msec) : 500=33.81%, 750=0.31% 00:26:48.461 cpu : usr=1.00%, sys=1.15%, ctx=1826, majf=0, minf=1 00:26:48.461 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:48.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:48.461 issued rwts: total=0,3540,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.461 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:48.461 job1: (groupid=0, jobs=1): err= 0: pid=763577: Sat Jul 13 05:15:54 2024 00:26:48.461 write: IOPS=451, BW=113MiB/s (118MB/s)(1156MiB/10249msec); 0 zone resets 00:26:48.461 slat (usec): min=18, max=243079, avg=1713.68, stdev=7286.92 00:26:48.461 clat (msec): min=2, max=706, avg=139.97, stdev=110.62 00:26:48.462 lat (msec): min=2, max=707, avg=141.68, stdev=112.11 00:26:48.462 clat percentiles (msec): 00:26:48.462 | 1.00th=[ 8], 5.00th=[ 20], 10.00th=[ 30], 20.00th=[ 65], 00:26:48.462 | 30.00th=[ 78], 40.00th=[ 90], 50.00th=[ 100], 60.00th=[ 110], 00:26:48.462 | 70.00th=[ 144], 80.00th=[ 247], 90.00th=[ 326], 95.00th=[ 363], 00:26:48.462 | 99.00th=[ 468], 99.50th=[ 498], 99.90th=[ 592], 99.95th=[ 617], 00:26:48.462 | 99.99th=[ 709] 00:26:48.462 bw ( KiB/s): min=40960, max=229888, per=10.87%, avg=116800.05, stdev=65306.23, samples=20 00:26:48.462 iops : min= 160, max= 898, avg=456.25, stdev=255.10, samples=20 00:26:48.462 lat (msec) : 4=0.22%, 10=1.82%, 20=3.11%, 50=10.46%, 100=36.89% 00:26:48.462 lat (msec) : 250=28.15%, 500=18.90%, 750=0.45% 00:26:48.462 cpu : usr=1.32%, sys=1.42%, ctx=2456, majf=0, minf=1 00:26:48.462 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:26:48.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:48.462 issued rwts: total=0,4625,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.462 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:48.462 job2: (groupid=0, jobs=1): err= 0: pid=763578: Sat Jul 13 05:15:54 2024 00:26:48.462 write: IOPS=241, BW=60.3MiB/s (63.2MB/s)(618MiB/10247msec); 0 zone resets 00:26:48.462 slat (usec): min=24, max=116714, avg=3679.31, stdev=8154.67 00:26:48.462 clat (msec): min=5, max=634, avg=261.65, stdev=88.74 00:26:48.462 lat (msec): min=7, max=634, avg=265.33, stdev=89.77 00:26:48.462 clat percentiles (msec): 00:26:48.462 | 1.00th=[ 23], 5.00th=[ 94], 10.00th=[ 142], 20.00th=[ 201], 00:26:48.462 | 30.00th=[ 232], 40.00th=[ 251], 50.00th=[ 271], 60.00th=[ 288], 00:26:48.462 | 70.00th=[ 309], 80.00th=[ 326], 90.00th=[ 355], 95.00th=[ 376], 00:26:48.462 | 99.00th=[ 527], 99.50th=[ 575], 99.90th=[ 634], 99.95th=[ 634], 00:26:48.462 | 99.99th=[ 634] 00:26:48.462 bw ( KiB/s): min=43008, max=96768, per=5.73%, avg=61601.30, stdev=14457.11, samples=20 00:26:48.462 iops : min= 168, max= 378, avg=240.60, stdev=56.44, samples=20 00:26:48.462 lat (msec) : 10=0.12%, 20=0.69%, 50=1.66%, 100=2.91%, 250=34.70% 00:26:48.462 lat (msec) : 500=58.66%, 750=1.26% 00:26:48.462 cpu : usr=0.83%, sys=0.76%, 
ctx=967, majf=0, minf=1 00:26:48.462 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.4% 00:26:48.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:48.462 issued rwts: total=0,2470,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.462 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:48.462 job3: (groupid=0, jobs=1): err= 0: pid=763579: Sat Jul 13 05:15:54 2024 00:26:48.462 write: IOPS=464, BW=116MiB/s (122MB/s)(1174MiB/10112msec); 0 zone resets 00:26:48.462 slat (usec): min=18, max=100074, avg=1808.39, stdev=4108.05 00:26:48.462 clat (msec): min=3, max=319, avg=135.99, stdev=58.94 00:26:48.462 lat (msec): min=3, max=319, avg=137.80, stdev=59.61 00:26:48.462 clat percentiles (msec): 00:26:48.462 | 1.00th=[ 14], 5.00th=[ 41], 10.00th=[ 59], 20.00th=[ 94], 00:26:48.462 | 30.00th=[ 100], 40.00th=[ 113], 50.00th=[ 131], 60.00th=[ 144], 00:26:48.462 | 70.00th=[ 171], 80.00th=[ 192], 90.00th=[ 220], 95.00th=[ 239], 00:26:48.462 | 99.00th=[ 271], 99.50th=[ 284], 99.90th=[ 313], 99.95th=[ 317], 00:26:48.462 | 99.99th=[ 321] 00:26:48.462 bw ( KiB/s): min=71680, max=169984, per=11.04%, avg=118564.25, stdev=31050.91, samples=20 00:26:48.462 iops : min= 280, max= 664, avg=463.10, stdev=121.31, samples=20 00:26:48.462 lat (msec) : 4=0.09%, 10=0.47%, 20=1.24%, 50=4.41%, 100=24.93% 00:26:48.462 lat (msec) : 250=66.60%, 500=2.28% 00:26:48.462 cpu : usr=1.44%, sys=1.71%, ctx=1940, majf=0, minf=1 00:26:48.462 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:48.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:48.462 issued rwts: total=0,4694,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.462 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:48.462 job4: (groupid=0, jobs=1): err= 0: pid=763580: Sat Jul 13 05:15:54 2024 00:26:48.462 write: IOPS=299, BW=74.9MiB/s (78.6MB/s)(768MiB/10247msec); 0 zone resets 00:26:48.462 slat (usec): min=23, max=130160, avg=2830.50, stdev=7461.54 00:26:48.462 clat (msec): min=2, max=551, avg=210.50, stdev=114.14 00:26:48.462 lat (msec): min=2, max=551, avg=213.33, stdev=115.73 00:26:48.462 clat percentiles (msec): 00:26:48.462 | 1.00th=[ 9], 5.00th=[ 32], 10.00th=[ 58], 20.00th=[ 84], 00:26:48.462 | 30.00th=[ 125], 40.00th=[ 163], 50.00th=[ 247], 60.00th=[ 268], 00:26:48.462 | 70.00th=[ 292], 80.00th=[ 321], 90.00th=[ 347], 95.00th=[ 363], 00:26:48.462 | 99.00th=[ 418], 99.50th=[ 477], 99.90th=[ 535], 99.95th=[ 550], 00:26:48.462 | 99.99th=[ 550] 00:26:48.462 bw ( KiB/s): min=40960, max=206749, per=7.17%, avg=76999.85, stdev=38955.72, samples=20 00:26:48.462 iops : min= 160, max= 807, avg=300.75, stdev=152.06, samples=20 00:26:48.462 lat (msec) : 4=0.16%, 10=1.24%, 20=1.73%, 50=4.56%, 100=16.93% 00:26:48.462 lat (msec) : 250=28.00%, 500=47.05%, 750=0.33% 00:26:48.462 cpu : usr=0.93%, sys=0.96%, ctx=1356, majf=0, minf=1 00:26:48.462 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=97.9% 00:26:48.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:48.462 issued rwts: total=0,3071,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.462 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:48.462 job5: (groupid=0, jobs=1): err= 0: pid=763581: Sat 
Jul 13 05:15:54 2024 00:26:48.462 write: IOPS=426, BW=107MiB/s (112MB/s)(1082MiB/10134msec); 0 zone resets 00:26:48.462 slat (usec): min=18, max=88102, avg=1504.16, stdev=4921.74 00:26:48.462 clat (usec): min=1286, max=404706, avg=148345.89, stdev=107069.21 00:26:48.462 lat (usec): min=1323, max=407821, avg=149850.05, stdev=108431.30 00:26:48.462 clat percentiles (msec): 00:26:48.462 | 1.00th=[ 9], 5.00th=[ 18], 10.00th=[ 26], 20.00th=[ 42], 00:26:48.462 | 30.00th=[ 53], 40.00th=[ 89], 50.00th=[ 123], 60.00th=[ 182], 00:26:48.462 | 70.00th=[ 213], 80.00th=[ 271], 90.00th=[ 305], 95.00th=[ 330], 00:26:48.462 | 99.00th=[ 351], 99.50th=[ 363], 99.90th=[ 388], 99.95th=[ 393], 00:26:48.462 | 99.99th=[ 405] 00:26:48.462 bw ( KiB/s): min=47104, max=270848, per=10.16%, avg=109139.15, stdev=63600.73, samples=20 00:26:48.462 iops : min= 184, max= 1058, avg=426.30, stdev=248.46, samples=20 00:26:48.462 lat (msec) : 2=0.09%, 4=0.21%, 10=1.11%, 20=5.34%, 50=22.21% 00:26:48.462 lat (msec) : 100=15.63%, 250=30.63%, 500=24.78% 00:26:48.462 cpu : usr=1.41%, sys=1.67%, ctx=2870, majf=0, minf=1 00:26:48.462 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:26:48.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:48.462 issued rwts: total=0,4326,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.462 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:48.462 job6: (groupid=0, jobs=1): err= 0: pid=763586: Sat Jul 13 05:15:54 2024 00:26:48.462 write: IOPS=516, BW=129MiB/s (135MB/s)(1304MiB/10094msec); 0 zone resets 00:26:48.462 slat (usec): min=23, max=89115, avg=1676.61, stdev=3995.56 00:26:48.462 clat (msec): min=4, max=325, avg=122.11, stdev=62.96 00:26:48.462 lat (msec): min=4, max=325, avg=123.78, stdev=63.84 00:26:48.462 clat percentiles (msec): 00:26:48.462 | 1.00th=[ 18], 5.00th=[ 36], 10.00th=[ 56], 20.00th=[ 62], 00:26:48.462 | 30.00th=[ 71], 40.00th=[ 106], 50.00th=[ 114], 60.00th=[ 128], 00:26:48.462 | 70.00th=[ 146], 80.00th=[ 182], 90.00th=[ 215], 95.00th=[ 236], 00:26:48.462 | 99.00th=[ 284], 99.50th=[ 305], 99.90th=[ 321], 99.95th=[ 321], 00:26:48.462 | 99.99th=[ 326] 00:26:48.462 bw ( KiB/s): min=55808, max=314368, per=12.28%, avg=131942.40, stdev=63150.72, samples=20 00:26:48.462 iops : min= 218, max= 1228, avg=515.40, stdev=246.68, samples=20 00:26:48.462 lat (msec) : 10=0.13%, 20=1.19%, 50=6.40%, 100=28.50%, 250=60.21% 00:26:48.462 lat (msec) : 500=3.57% 00:26:48.462 cpu : usr=1.67%, sys=1.68%, ctx=2087, majf=0, minf=1 00:26:48.462 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:48.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:48.462 issued rwts: total=0,5217,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.462 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:48.462 job7: (groupid=0, jobs=1): err= 0: pid=763588: Sat Jul 13 05:15:54 2024 00:26:48.462 write: IOPS=404, BW=101MiB/s (106MB/s)(1022MiB/10112msec); 0 zone resets 00:26:48.462 slat (usec): min=16, max=81079, avg=1498.44, stdev=5060.60 00:26:48.462 clat (usec): min=1704, max=436326, avg=156748.49, stdev=110456.35 00:26:48.462 lat (usec): min=1775, max=436376, avg=158246.93, stdev=111915.93 00:26:48.462 clat percentiles (msec): 00:26:48.462 | 1.00th=[ 8], 5.00th=[ 19], 10.00th=[ 33], 20.00th=[ 52], 00:26:48.462 | 30.00th=[ 68], 
40.00th=[ 104], 50.00th=[ 132], 60.00th=[ 167], 00:26:48.462 | 70.00th=[ 203], 80.00th=[ 279], 90.00th=[ 334], 95.00th=[ 359], 00:26:48.462 | 99.00th=[ 397], 99.50th=[ 401], 99.90th=[ 414], 99.95th=[ 414], 00:26:48.462 | 99.99th=[ 439] 00:26:48.462 bw ( KiB/s): min=43008, max=279040, per=9.59%, avg=103040.00, stdev=57060.80, samples=20 00:26:48.463 iops : min= 168, max= 1090, avg=402.50, stdev=222.89, samples=20 00:26:48.463 lat (msec) : 2=0.05%, 4=0.20%, 10=1.47%, 20=3.94%, 50=13.72% 00:26:48.463 lat (msec) : 100=19.01%, 250=36.42%, 500=25.20% 00:26:48.463 cpu : usr=1.23%, sys=1.45%, ctx=2819, majf=0, minf=1 00:26:48.463 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:48.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.463 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:48.463 issued rwts: total=0,4088,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.463 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:48.463 job8: (groupid=0, jobs=1): err= 0: pid=763594: Sat Jul 13 05:15:54 2024 00:26:48.463 write: IOPS=253, BW=63.2MiB/s (66.3MB/s)(648MiB/10245msec); 0 zone resets 00:26:48.463 slat (usec): min=21, max=213365, avg=3439.05, stdev=8861.91 00:26:48.463 clat (msec): min=4, max=623, avg=249.36, stdev=102.36 00:26:48.463 lat (msec): min=4, max=623, avg=252.80, stdev=103.56 00:26:48.463 clat percentiles (msec): 00:26:48.463 | 1.00th=[ 12], 5.00th=[ 33], 10.00th=[ 96], 20.00th=[ 199], 00:26:48.463 | 30.00th=[ 218], 40.00th=[ 230], 50.00th=[ 247], 60.00th=[ 275], 00:26:48.463 | 70.00th=[ 300], 80.00th=[ 317], 90.00th=[ 351], 95.00th=[ 418], 00:26:48.463 | 99.00th=[ 523], 99.50th=[ 531], 99.90th=[ 625], 99.95th=[ 625], 00:26:48.463 | 99.99th=[ 625] 00:26:48.463 bw ( KiB/s): min=32768, max=104960, per=6.03%, avg=64748.65, stdev=17182.57, samples=20 00:26:48.463 iops : min= 128, max= 410, avg=252.90, stdev=67.12, samples=20 00:26:48.463 lat (msec) : 10=0.81%, 20=1.89%, 50=3.97%, 100=3.78%, 250=41.47% 00:26:48.463 lat (msec) : 500=46.30%, 750=1.77% 00:26:48.463 cpu : usr=0.85%, sys=0.71%, ctx=1084, majf=0, minf=1 00:26:48.463 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:48.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.463 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:48.463 issued rwts: total=0,2592,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.463 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:48.463 job9: (groupid=0, jobs=1): err= 0: pid=763595: Sat Jul 13 05:15:54 2024 00:26:48.463 write: IOPS=442, BW=111MiB/s (116MB/s)(1116MiB/10100msec); 0 zone resets 00:26:48.463 slat (usec): min=16, max=108960, avg=1761.91, stdev=4566.10 00:26:48.463 clat (usec): min=1974, max=413235, avg=142950.16, stdev=71154.83 00:26:48.463 lat (msec): min=2, max=413, avg=144.71, stdev=72.03 00:26:48.463 clat percentiles (msec): 00:26:48.463 | 1.00th=[ 6], 5.00th=[ 21], 10.00th=[ 50], 20.00th=[ 89], 00:26:48.463 | 30.00th=[ 113], 40.00th=[ 126], 50.00th=[ 138], 60.00th=[ 148], 00:26:48.463 | 70.00th=[ 169], 80.00th=[ 199], 90.00th=[ 236], 95.00th=[ 271], 00:26:48.463 | 99.00th=[ 334], 99.50th=[ 376], 99.90th=[ 409], 99.95th=[ 409], 00:26:48.463 | 99.99th=[ 414] 00:26:48.463 bw ( KiB/s): min=58368, max=175104, per=10.49%, avg=112691.20, stdev=31326.13, samples=20 00:26:48.463 iops : min= 228, max= 684, avg=440.20, stdev=122.37, samples=20 00:26:48.463 lat (msec) : 2=0.02%, 4=0.49%, 10=1.77%, 
20=2.69%, 50=5.11% 00:26:48.463 lat (msec) : 100=11.11%, 250=71.27%, 500=7.55% 00:26:48.463 cpu : usr=1.52%, sys=1.42%, ctx=2255, majf=0, minf=1 00:26:48.463 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:48.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.463 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:48.463 issued rwts: total=0,4465,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.463 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:48.463 job10: (groupid=0, jobs=1): err= 0: pid=763596: Sat Jul 13 05:15:54 2024 00:26:48.463 write: IOPS=383, BW=96.0MiB/s (101MB/s)(983MiB/10242msec); 0 zone resets 00:26:48.463 slat (usec): min=18, max=307886, avg=1970.31, stdev=7901.45 00:26:48.463 clat (msec): min=2, max=592, avg=164.52, stdev=122.12 00:26:48.463 lat (msec): min=2, max=592, avg=166.49, stdev=123.70 00:26:48.463 clat percentiles (msec): 00:26:48.463 | 1.00th=[ 8], 5.00th=[ 20], 10.00th=[ 33], 20.00th=[ 62], 00:26:48.463 | 30.00th=[ 81], 40.00th=[ 97], 50.00th=[ 118], 60.00th=[ 178], 00:26:48.463 | 70.00th=[ 222], 80.00th=[ 288], 90.00th=[ 359], 95.00th=[ 393], 00:26:48.463 | 99.00th=[ 489], 99.50th=[ 550], 99.90th=[ 575], 99.95th=[ 592], 00:26:48.463 | 99.99th=[ 592] 00:26:48.463 bw ( KiB/s): min=38912, max=205312, per=9.22%, avg=99046.40, stdev=57294.77, samples=20 00:26:48.463 iops : min= 152, max= 802, avg=386.90, stdev=223.81, samples=20 00:26:48.463 lat (msec) : 4=0.20%, 10=1.65%, 20=3.26%, 50=12.08%, 100=28.15% 00:26:48.463 lat (msec) : 250=29.35%, 500=24.36%, 750=0.94% 00:26:48.463 cpu : usr=1.01%, sys=1.33%, ctx=2125, majf=0, minf=1 00:26:48.463 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:48.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.463 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:48.463 issued rwts: total=0,3932,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.463 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:48.463 00:26:48.463 Run status group 0 (all jobs): 00:26:48.463 WRITE: bw=1049MiB/s (1100MB/s), 60.3MiB/s-129MiB/s (63.2MB/s-135MB/s), io=10.5GiB (11.3GB), run=10094-10252msec 00:26:48.463 00:26:48.463 Disk stats (read/write): 00:26:48.463 nvme0n1: ios=49/7017, merge=0/0, ticks=55/1232618, in_queue=1232673, util=97.24% 00:26:48.463 nvme10n1: ios=43/9193, merge=0/0, ticks=1417/1219050, in_queue=1220467, util=99.92% 00:26:48.463 nvme1n1: ios=0/4886, merge=0/0, ticks=0/1229203, in_queue=1229203, util=97.51% 00:26:48.463 nvme2n1: ios=23/9193, merge=0/0, ticks=35/1210484, in_queue=1210519, util=97.69% 00:26:48.463 nvme3n1: ios=47/6087, merge=0/0, ticks=3366/1228516, in_queue=1231882, util=99.87% 00:26:48.463 nvme4n1: ios=0/8406, merge=0/0, ticks=0/1222160, in_queue=1222160, util=98.04% 00:26:48.463 nvme5n1: ios=42/10199, merge=0/0, ticks=2818/1210163, in_queue=1212981, util=99.87% 00:26:48.463 nvme6n1: ios=0/7982, merge=0/0, ticks=0/1222046, in_queue=1222046, util=98.34% 00:26:48.463 nvme7n1: ios=0/5131, merge=0/0, ticks=0/1229468, in_queue=1229468, util=98.83% 00:26:48.463 nvme8n1: ios=44/8661, merge=0/0, ticks=91/1210382, in_queue=1210473, util=99.45% 00:26:48.463 nvme9n1: ios=46/7809, merge=0/0, ticks=3625/1202883, in_queue=1206508, util=99.90% 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 
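After the sync, the teardown walks all eleven subsystems. Reconstructed from the multiconnection.sh@37-@40 xtrace markers in the trace that follows, the driving loop looks roughly like this (a sketch from the trace, not the verbatim script):

    # Reconstructed from the @37-@40 trace markers; exact quoting in the
    # real script may differ.
    for i in $(seq 1 $NVMF_SUBSYS); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"              # @38
        waitforserial_disconnect "SPDK${i}"                             # @39
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"   # @40
    done

Each iteration disconnects the initiator-side controller, waits for the block device with serial SPDK${i} to disappear (the lsblk polling visible at autotest_common.sh@1219-@1231 below), then deletes the subsystem on the target via RPC.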
00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:48.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:48.463 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:48.463 05:15:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:49.028 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:49.028 05:15:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:49.028 05:15:55 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:49.028 05:15:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:49.028 05:15:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:26:49.028 05:15:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:49.028 05:15:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:26:49.028 05:15:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:49.028 05:15:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:49.028 05:15:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.028 05:15:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:49.028 05:15:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.028 05:15:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:49.028 05:15:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:49.286 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:49.286 05:15:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:49.286 05:15:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:49.286 05:15:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:49.286 05:15:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:26:49.286 05:15:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:49.286 05:15:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:26:49.286 05:15:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:49.286 05:15:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:49.286 05:15:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.286 05:15:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:49.286 05:15:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.286 05:15:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:49.286 05:15:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:49.544 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:49.544 05:15:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:49.544 05:15:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:49.544 05:15:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:49.544 05:15:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:26:49.544 05:15:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:49.544 05:15:55 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:26:49.544 05:15:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:49.544 05:15:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:49.544 05:15:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.544 05:15:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:49.544 05:15:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.544 05:15:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:49.544 05:15:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:50.108 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:50.108 05:15:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:50.108 05:15:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:50.108 05:15:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:50.108 05:15:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:26:50.108 05:15:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:50.108 05:15:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:26:50.108 05:15:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:50.108 05:15:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:50.108 05:15:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.108 05:15:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:50.108 05:15:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.108 05:15:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:50.108 05:15:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:50.109 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:50.109 05:15:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:50.109 05:15:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:50.109 05:15:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:50.109 05:15:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:26:50.109 05:15:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:50.109 05:15:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:26:50.109 05:15:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:50.109 05:15:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:50.109 05:15:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.109 05:15:56 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:26:50.109 05:15:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.109 05:15:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:50.109 05:15:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:50.367 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:50.367 05:15:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:50.367 05:15:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:50.367 05:15:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:50.367 05:15:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:26:50.626 05:15:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:50.626 05:15:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:26:50.626 05:15:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:50.626 05:15:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:50.626 05:15:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.626 05:15:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:50.626 05:15:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.626 05:15:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:50.626 05:15:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:50.626 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:50.626 05:15:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:50.626 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:50.626 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:50.626 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:26:50.626 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:50.626 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:26:50.884 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:50.884 05:15:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:50.884 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.884 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:50.884 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.884 05:15:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:50.884 05:15:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:50.884 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 
controller(s) 00:26:50.884 05:15:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:50.884 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:50.884 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:50.884 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:26:50.884 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:50.884 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:26:50.884 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:50.884 05:15:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:50.884 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.884 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:50.884 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.885 05:15:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:50.885 05:15:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:51.146 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 
-- # for i in {1..20} 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:51.146 rmmod nvme_tcp 00:26:51.146 rmmod nvme_fabrics 00:26:51.146 rmmod nvme_keyring 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 757394 ']' 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 757394 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 757394 ']' 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 757394 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:26:51.146 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:51.147 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 757394 00:26:51.147 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:51.147 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:51.147 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 757394' 00:26:51.147 killing process with pid 757394 00:26:51.147 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 757394 00:26:51.147 05:15:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 757394 00:26:54.425 05:16:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:54.425 05:16:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:54.425 05:16:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:54.425 05:16:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:54.425 05:16:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:54.425 05:16:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.425 05:16:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:54.425 05:16:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.323 05:16:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:56.323 00:26:56.323 real 1m5.638s 00:26:56.323 user 3m39.258s 00:26:56.323 sys 0m23.216s 00:26:56.323 05:16:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:56.323 05:16:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:56.324 ************************************ 00:26:56.324 END TEST nvmf_multiconnection 00:26:56.324 ************************************ 00:26:56.581 05:16:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:56.581 05:16:02 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:56.581 05:16:02 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:56.581 05:16:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:56.581 05:16:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:56.581 ************************************ 00:26:56.581 START TEST nvmf_initiator_timeout 00:26:56.581 ************************************ 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:56.581 * Looking for test storage... 00:26:56.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:26:56.581 05:16:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 
-- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:58.480 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:58.480 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:58.480 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:58.481 05:16:04 
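The stretch above is nvmf/common.sh's gather_supported_nvmf_pci_devs: it buckets known PCI vendor:device pairs into e810/x722/mlx arrays, matches both functions of the Intel 0x8086:0x159b adapter (an E810 part bound to the ice driver), and then resolves each PCI address to its kernel interface names through sysfs. A minimal standalone sketch of that last lookup, reusing the addresses from this log (the loop body is illustrative, not the script's exact code):

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        # every PCI network function lists its interface name(s) under this sysfs path
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] || continue        # skip when the glob matched nothing
            echo "$pci -> ${dev##*/}"        # e.g. 0000:0a:00.0 -> cvl_0_0
        done
    done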
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:58.481 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:58.481 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:58.481 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:58.741 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:58.741 05:16:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:58.741 05:16:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:58.741 05:16:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:58.741 05:16:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:58.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:58.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:26:58.741 00:26:58.741 --- 10.0.0.2 ping statistics --- 00:26:58.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.741 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:26:58.741 05:16:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:58.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:58.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:26:58.741 00:26:58.741 --- 10.0.0.1 ping statistics --- 00:26:58.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.741 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:26:58.741 05:16:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:58.741 05:16:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:26:58.741 05:16:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:58.741 05:16:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:58.741 05:16:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:58.741 05:16:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:58.741 05:16:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:58.741 05:16:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:58.741 05:16:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:58.741 05:16:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:58.741 05:16:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:58.741 05:16:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:58.741 05:16:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.741 05:16:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=767343 00:26:58.741 05:16:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:58.741 05:16:05 
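nvmf_tcp_init then carves the two ports into the two ends of the link: cvl_0_0 moves into a dedicated network namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule admits NVMe/TCP traffic on port 4420, and one ping in each direction proves reachability before anything NVMe-related starts. Condensed from the trace above:

    ip netns add cvl_0_0_ns_spdk                        # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator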
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 767343 00:26:58.741 05:16:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 767343 ']' 00:26:58.741 05:16:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.741 05:16:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:58.741 05:16:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:58.741 05:16:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:58.741 05:16:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.741 [2024-07-13 05:16:05.154820] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:26:58.741 [2024-07-13 05:16:05.154990] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:58.741 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.999 [2024-07-13 05:16:05.296520] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:59.258 [2024-07-13 05:16:05.555457] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:59.258 [2024-07-13 05:16:05.555531] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:59.258 [2024-07-13 05:16:05.555569] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:59.258 [2024-07-13 05:16:05.555590] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:59.258 [2024-07-13 05:16:05.555611] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
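nvmfappstart launches build/bin/nvmf_tgt inside that namespace with -m 0xF, a four-core reactor mask that matches the four "Reactor started" notices below, and -e 0xFFFF, which enables every tracepoint group and is why app_setup_trace advertises the spdk_trace commands. waitforlisten then blocks until the RPC socket answers; a hypothetical reduction of that helper, using only the stock rpc.py and the default socket path (the real function in autotest_common.sh does more bookkeeping):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "target died" >&2; exit 1; }
        sleep 0.5
    done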
00:26:59.258 [2024-07-13 05:16:05.555728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.258 [2024-07-13 05:16:05.555807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:59.258 [2024-07-13 05:16:05.555909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.258 [2024-07-13 05:16:05.555917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.825 Malloc0 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.825 Delay0 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.825 [2024-07-13 05:16:06.166743] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.825 [2024-07-13 05:16:06.195840] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.825 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:00.391 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:00.391 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:27:00.391 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:00.391 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:00.391 05:16:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:27:02.913 05:16:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:02.913 05:16:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:02.913 05:16:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:27:02.913 05:16:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:02.913 05:16:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:02.913 05:16:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:27:02.913 05:16:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=767774 00:27:02.913 05:16:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:27:02.913 05:16:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:02.913 [global] 00:27:02.913 thread=1 00:27:02.913 invalidate=1 00:27:02.913 rw=write 00:27:02.913 time_based=1 00:27:02.913 runtime=60 00:27:02.913 ioengine=libaio 00:27:02.913 direct=1 00:27:02.913 bs=4096 00:27:02.913 iodepth=1 00:27:02.913 norandommap=0 00:27:02.913 numjobs=1 00:27:02.913 00:27:02.913 verify_dump=1 00:27:02.913 verify_backlog=512 00:27:02.913 verify_state_save=0 00:27:02.913 do_verify=1 00:27:02.913 verify=crc32c-intel 00:27:02.913 [job0] 00:27:02.913 filename=/dev/nvme0n1 00:27:02.913 Could not set queue depth (nvme0n1) 00:27:02.913 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:02.913 fio-3.35 00:27:02.913 
Starting 1 thread 00:27:05.439 05:16:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:05.439 05:16:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.439 05:16:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:05.439 true 00:27:05.439 05:16:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.439 05:16:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:05.439 05:16:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.439 05:16:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:05.439 true 00:27:05.439 05:16:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.439 05:16:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:05.439 05:16:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.439 05:16:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:05.439 true 00:27:05.439 05:16:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.439 05:16:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:05.439 05:16:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.439 05:16:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:05.439 true 00:27:05.439 05:16:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.439 05:16:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:08.715 05:16:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:08.715 05:16:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.715 05:16:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:08.715 true 00:27:08.715 05:16:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.715 05:16:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:08.715 05:16:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.715 05:16:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:08.715 true 00:27:08.715 05:16:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.715 05:16:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:08.715 05:16:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.715 05:16:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:08.715 true 00:27:08.715 05:16:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.715 05:16:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 
-- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:08.715 05:16:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.715 05:16:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:08.715 true 00:27:08.715 05:16:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.715 05:16:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:08.715 05:16:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 767774 00:28:04.951 00:28:04.951 job0: (groupid=0, jobs=1): err= 0: pid=767849: Sat Jul 13 05:17:09 2024 00:28:04.951 read: IOPS=7, BW=31.7KiB/s (32.4kB/s)(1900KiB/60014msec) 00:28:04.951 slat (usec): min=6, max=11804, avg=49.16, stdev=540.58 00:28:04.951 clat (usec): min=411, max=41146k, avg=125939.16, stdev=1886119.78 00:28:04.951 lat (usec): min=418, max=41146k, avg=125988.32, stdev=1886117.71 00:28:04.951 clat percentiles (usec): 00:28:04.951 | 1.00th=[ 416], 5.00th=[ 40633], 10.00th=[ 41157], 00:28:04.951 | 20.00th=[ 41157], 30.00th=[ 41157], 40.00th=[ 41157], 00:28:04.951 | 50.00th=[ 41157], 60.00th=[ 41157], 70.00th=[ 41157], 00:28:04.951 | 80.00th=[ 41157], 90.00th=[ 41157], 95.00th=[ 41157], 00:28:04.951 | 99.00th=[ 42206], 99.50th=[ 43779], 99.90th=[17112761], 00:28:04.951 | 99.95th=[17112761], 99.99th=[17112761] 00:28:04.951 write: IOPS=8, BW=34.1KiB/s (34.9kB/s)(2048KiB/60014msec); 0 zone resets 00:28:04.951 slat (nsec): min=9961, max=57838, avg=18981.49, stdev=7048.34 00:28:04.951 clat (usec): min=244, max=414, avg=298.66, stdev=28.92 00:28:04.951 lat (usec): min=257, max=457, avg=317.64, stdev=31.82 00:28:04.951 clat percentiles (usec): 00:28:04.951 | 1.00th=[ 249], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 273], 00:28:04.951 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 302], 60.00th=[ 306], 00:28:04.951 | 70.00th=[ 310], 80.00th=[ 318], 90.00th=[ 334], 95.00th=[ 355], 00:28:04.951 | 99.00th=[ 379], 99.50th=[ 388], 99.90th=[ 416], 99.95th=[ 416], 00:28:04.951 | 99.99th=[ 416] 00:28:04.951 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:28:04.951 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:28:04.951 lat (usec) : 250=0.71%, 500=52.99%, 750=0.10% 00:28:04.951 lat (msec) : 50=46.10%, >=2000=0.10% 00:28:04.951 cpu : usr=0.02%, sys=0.05%, ctx=988, majf=0, minf=2 00:28:04.951 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:04.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.951 issued rwts: total=475,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.951 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:04.951 00:28:04.951 Run status group 0 (all jobs): 00:28:04.951 READ: bw=31.7KiB/s (32.4kB/s), 31.7KiB/s-31.7KiB/s (32.4kB/s-32.4kB/s), io=1900KiB (1946kB), run=60014-60014msec 00:28:04.951 WRITE: bw=34.1KiB/s (34.9kB/s), 34.1KiB/s-34.1KiB/s (34.9kB/s-34.9kB/s), io=2048KiB (2097kB), run=60014-60014msec 00:28:04.951 00:28:04.951 Disk stats (read/write): 00:28:04.951 nvme0n1: ios=571/512, merge=0/0, ticks=19784/143, in_queue=19927, util=99.62% 00:28:04.951 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:04.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:04.951 05:17:09 
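That fio summary is the payoff of the whole test. The namespace exported to the host was Delay0, a delay bdev stacked on the Malloc0 RAM disk; mid-run the rpc_cmd calls above raised its latency knobs from the initial 30 usec to 31,000,000 usec (31 seconds, past the host's default 30-second I/O timeout) and later dropped them back, while a 60-second iodepth=1 fio verify job kept writing. The job ending with err=0, with roughly 41-second worst-case completions visible in the read latency column, shows the initiator recovered the stalled I/O instead of failing it. rpc_cmd in the trace is autotest's wrapper around scripts/rpc.py, so the target-side sequence condenses to (the trace updates all four latency knobs the same way; only avg_read is shown here):

    rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM disk, 512 B blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # latencies in usec
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # mid-run: push completions past the 30 s host timeout, then restore
    rpc.py bdev_delay_update_latency Delay0 avg_read 31000000
    rpc.py bdev_delay_update_latency Delay0 avg_read 30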
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:04.951 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:28:04.951 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:04.951 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:04.951 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:04.951 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:04.951 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:28:04.951 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:04.951 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:04.951 nvmf hotplug test: fio successful as expected 00:28:04.951 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:04.951 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.951 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:04.951 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.951 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:04.951 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:28:04.951 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:04.951 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:04.951 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:28:04.951 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:04.951 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:28:04.951 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:04.951 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:04.951 rmmod nvme_tcp 00:28:04.951 rmmod nvme_fabrics 00:28:04.951 rmmod nvme_keyring 00:28:04.951 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:04.951 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:28:04.951 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:28:04.951 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 767343 ']' 00:28:04.952 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 767343 00:28:04.952 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 767343 ']' 00:28:04.952 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 767343 00:28:04.952 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:28:04.952 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:04.952 05:17:09 
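Teardown runs in dependency order: disconnect the host controller first, unload nvme-tcp/nvme-fabrics/nvme-keyring next (the modprobe -r loop runs under set +e with retries, since the modules stay busy until the last reference drops), and only then signal the target. killprocess guards the kill by checking that the pid still names an SPDK reactor; a hypothetical reduction of the checks traced above:

    killprocess() {
        local pid=$1
        # refuse to signal a recycled pid: the target's comm is reactor_0
        ps --no-headers -o comm= "$pid" | grep -q reactor_0 || return 0
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"   # wait works because the target is our child
    }
    killprocess "$nvmfpid"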
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 767343 00:28:04.952 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:04.952 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:04.952 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 767343' 00:28:04.952 killing process with pid 767343 00:28:04.952 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 767343 00:28:04.952 05:17:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 767343 00:28:04.952 05:17:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:04.952 05:17:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:04.952 05:17:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:04.952 05:17:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:04.952 05:17:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:04.952 05:17:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.952 05:17:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:04.952 05:17:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.854 05:17:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:06.854 00:28:06.854 real 1m10.081s 00:28:06.854 user 4m15.447s 00:28:06.854 sys 0m6.679s 00:28:06.854 05:17:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:06.854 05:17:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:06.854 ************************************ 00:28:06.854 END TEST nvmf_initiator_timeout 00:28:06.854 ************************************ 00:28:06.854 05:17:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:06.854 05:17:12 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:28:06.854 05:17:12 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:28:06.854 05:17:12 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:28:06.854 05:17:12 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:28:06.854 05:17:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:28:08.760 05:17:14 nvmf_tcp -- 
nvmf/common.sh@297 -- # local -ga x722 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:08.760 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:08.760 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:08.760 
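With END TEST nvmf_initiator_timeout printed above (along with its real/user/sys times), control is back in nvmf/nvmf.sh, which repeats the NIC scan at suite level and only schedules the ADQ benchmark when physical tcp-capable ports are present. Each case is dispatched through the run_test wrapper whose banners frame this log; a hypothetical reduction of that wrapper:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # source of the per-test real/user/sys lines
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }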
05:17:14 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:08.760 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:08.760 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:28:08.760 05:17:14 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:08.760 05:17:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:08.760 05:17:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:08.760 05:17:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:08.760 ************************************ 00:28:08.760 START TEST nvmf_perf_adq 00:28:08.760 ************************************ 00:28:08.760 05:17:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:08.760 * Looking for test storage... 
00:28:08.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:08.760 05:17:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:08.760 05:17:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:28:08.760 05:17:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:08.760 05:17:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:08.760 05:17:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:08.760 05:17:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:08.760 05:17:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:08.760 05:17:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:28:08.761 05:17:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:10.667 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:10.667 Found 0000:0a:00.1 (0x8086 - 0x159b) 
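perf_adq.sh repeats the scan on its own behalf because ADQ (Application Device Queues) is a feature of the E810/ice combination; the test only makes sense on these ports and is about to bounce their driver. A quick standalone way to confirm the matched functions are ice-bound (standard sysfs layout, addresses from this log):

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        # the driver symlink names the bound kernel module
        echo "$pci -> $(basename "$(readlink /sys/bus/pci/devices/"$pci"/driver)")"
    done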
00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:10.667 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:10.667 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:28:10.667 05:17:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:28:11.235 05:17:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:28:13.141 05:17:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:28:18.410 05:17:24 
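adq_reload_driver bounces the NIC driver so both ports come back with a clean queue configuration before any ADQ setup, and the trailing sleep gives the links time to settle. As traced:

    rmmod ice
    modprobe ice
    sleep 5    # let the ports return before nvmftestinit re-addresses them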
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:18.410 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:18.410 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:18.410 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:18.410 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:18.410 05:17:24 
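
nvmf_tcp_init splits the two E810 ports between a fresh network namespace (target side) and the host namespace (initiator side), so NVMe/TCP traffic crosses a real link rather than loopback. The sequence above reduces to roughly this sketch, with the device and address names exactly as they appear in the trace:

    ip netns add cvl_0_0_ns_spdk                       # namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

The bidirectional pings that follow confirm the 10.0.0.1 <-> 10.0.0.2 path before anything NVMe-related starts.
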
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:18.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:18.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:28:18.410 00:28:18.410 --- 10.0.0.2 ping statistics --- 00:28:18.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.410 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:28:18.410 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:18.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:18.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:28:18.411 00:28:18.411 --- 10.0.0.1 ping statistics --- 00:28:18.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.411 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:28:18.411 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:18.411 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:28:18.411 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:18.411 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:18.411 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:18.411 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:18.411 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:18.411 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:18.411 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:18.411 05:17:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:18.411 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:18.411 05:17:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:18.411 05:17:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:18.411 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=779481 00:28:18.411 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:18.411 05:17:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 779481 00:28:18.411 05:17:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 779481 ']' 00:28:18.411 05:17:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.411 05:17:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:18.411 05:17:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:18.411 05:17:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:18.411 05:17:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:18.411 [2024-07-13 05:17:24.742609] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
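
Because NVMF_APP is prefixed with the namespace command, the target always executes inside cvl_0_0_ns_spdk. nvmfappstart backgrounds nvmf_tgt and then blocks until its RPC socket answers; reproducing that by hand looks roughly like the sketch below (the polling loop approximates waitforlisten, and the rpc.py path is relative to the SPDK tree):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the app is up (waitforlisten equivalent)
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

--wait-for-rpc holds back framework initialization until framework_start_init is issued over RPC, which is what lets the test set socket options (placement-id, zero-copy send) before any subsystem exists.
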
00:28:18.411 [2024-07-13 05:17:24.742753] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:18.411 EAL: No free 2048 kB hugepages reported on node 1 00:28:18.411 [2024-07-13 05:17:24.872570] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:18.669 [2024-07-13 05:17:25.138447] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:18.669 [2024-07-13 05:17:25.138514] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:18.669 [2024-07-13 05:17:25.138540] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:18.669 [2024-07-13 05:17:25.138560] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:18.669 [2024-07-13 05:17:25.138580] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:18.669 [2024-07-13 05:17:25.138687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:18.669 [2024-07-13 05:17:25.138750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:18.669 [2024-07-13 05:17:25.138788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.669 [2024-07-13 05:17:25.138799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:19.233 05:17:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:19.233 05:17:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:28:19.233 05:17:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:19.233 05:17:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:19.233 05:17:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.233 05:17:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.233 05:17:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:28:19.233 05:17:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:19.233 05:17:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:19.233 05:17:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.233 05:17:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.490 05:17:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.490 05:17:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:19.490 05:17:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:19.490 05:17:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.490 05:17:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.490 05:17:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.490 05:17:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:19.490 05:17:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.490 05:17:25 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:28:19.747 05:17:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.747 05:17:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:19.747 05:17:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.747 05:17:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.747 [2024-07-13 05:17:26.138752] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.747 05:17:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.747 05:17:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:19.747 05:17:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.747 05:17:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.747 Malloc1 00:28:19.747 05:17:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.747 05:17:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:19.747 05:17:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.747 05:17:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.747 05:17:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.747 05:17:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:19.747 05:17:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.747 05:17:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.747 05:17:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.747 05:17:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:19.747 05:17:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.747 05:17:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.747 [2024-07-13 05:17:26.244340] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:20.005 05:17:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.005 05:17:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=779762 00:28:20.005 05:17:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:28:20.005 05:17:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:20.005 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.902 05:17:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:28:21.902 05:17:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.902 05:17:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.902 05:17:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.902 05:17:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:28:21.902 
"tick_rate": 2700000000, 00:28:21.902 "poll_groups": [ 00:28:21.902 { 00:28:21.902 "name": "nvmf_tgt_poll_group_000", 00:28:21.902 "admin_qpairs": 1, 00:28:21.903 "io_qpairs": 1, 00:28:21.903 "current_admin_qpairs": 1, 00:28:21.903 "current_io_qpairs": 1, 00:28:21.903 "pending_bdev_io": 0, 00:28:21.903 "completed_nvme_io": 15604, 00:28:21.903 "transports": [ 00:28:21.903 { 00:28:21.903 "trtype": "TCP" 00:28:21.903 } 00:28:21.903 ] 00:28:21.903 }, 00:28:21.903 { 00:28:21.903 "name": "nvmf_tgt_poll_group_001", 00:28:21.903 "admin_qpairs": 0, 00:28:21.903 "io_qpairs": 1, 00:28:21.903 "current_admin_qpairs": 0, 00:28:21.903 "current_io_qpairs": 1, 00:28:21.903 "pending_bdev_io": 0, 00:28:21.903 "completed_nvme_io": 17069, 00:28:21.903 "transports": [ 00:28:21.903 { 00:28:21.903 "trtype": "TCP" 00:28:21.903 } 00:28:21.903 ] 00:28:21.903 }, 00:28:21.903 { 00:28:21.903 "name": "nvmf_tgt_poll_group_002", 00:28:21.903 "admin_qpairs": 0, 00:28:21.903 "io_qpairs": 1, 00:28:21.903 "current_admin_qpairs": 0, 00:28:21.903 "current_io_qpairs": 1, 00:28:21.903 "pending_bdev_io": 0, 00:28:21.903 "completed_nvme_io": 16658, 00:28:21.903 "transports": [ 00:28:21.903 { 00:28:21.903 "trtype": "TCP" 00:28:21.903 } 00:28:21.903 ] 00:28:21.903 }, 00:28:21.903 { 00:28:21.903 "name": "nvmf_tgt_poll_group_003", 00:28:21.903 "admin_qpairs": 0, 00:28:21.903 "io_qpairs": 1, 00:28:21.903 "current_admin_qpairs": 0, 00:28:21.903 "current_io_qpairs": 1, 00:28:21.903 "pending_bdev_io": 0, 00:28:21.903 "completed_nvme_io": 16886, 00:28:21.903 "transports": [ 00:28:21.903 { 00:28:21.903 "trtype": "TCP" 00:28:21.903 } 00:28:21.903 ] 00:28:21.903 } 00:28:21.903 ] 00:28:21.903 }' 00:28:21.903 05:17:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:21.903 05:17:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:28:21.903 05:17:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:28:21.903 05:17:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:28:21.903 05:17:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 779762 00:28:30.000 Initializing NVMe Controllers 00:28:30.000 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:30.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:30.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:30.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:30.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:30.000 Initialization complete. Launching workers. 
00:28:30.000 ======================================================== 00:28:30.000 Latency(us) 00:28:30.000 Device Information : IOPS MiB/s Average min max 00:28:30.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9190.15 35.90 6965.49 2267.10 11303.30 00:28:30.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9324.55 36.42 6862.79 1735.07 10493.93 00:28:30.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9375.95 36.62 6826.09 6142.61 11312.76 00:28:30.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8370.26 32.70 7646.07 2686.87 11316.80 00:28:30.000 ======================================================== 00:28:30.000 Total : 36260.91 141.64 7060.14 1735.07 11316.80 00:28:30.000 00:28:30.000 05:17:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:28:30.000 05:17:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:30.000 05:17:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:28:30.000 05:17:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:30.000 05:17:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:28:30.000 05:17:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:30.000 05:17:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:30.000 rmmod nvme_tcp 00:28:30.000 rmmod nvme_fabrics 00:28:30.257 rmmod nvme_keyring 00:28:30.257 05:17:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:30.257 05:17:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:28:30.257 05:17:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:28:30.257 05:17:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 779481 ']' 00:28:30.257 05:17:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 779481 00:28:30.257 05:17:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 779481 ']' 00:28:30.257 05:17:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 779481 00:28:30.257 05:17:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:28:30.257 05:17:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:30.257 05:17:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 779481 00:28:30.257 05:17:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:30.257 05:17:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:30.257 05:17:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 779481' 00:28:30.257 killing process with pid 779481 00:28:30.257 05:17:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 779481 00:28:30.257 05:17:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 779481 00:28:31.631 05:17:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:31.631 05:17:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:31.631 05:17:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:31.631 05:17:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:31.631 05:17:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:31.631 05:17:38 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.631 05:17:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:31.631 05:17:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.161 05:17:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:34.161 05:17:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:28:34.161 05:17:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:28:34.419 05:17:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:28:36.353 05:17:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:28:41.613 05:17:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:28:41.613 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:41.613 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:41.613 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:41.613 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:41.613 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:41.613 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.613 05:17:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:41.613 05:17:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.613 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:41.613 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:41.613 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:28:41.613 05:17:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:41.613 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:41.613 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:28:41.613 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:41.613 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:41.614 05:17:47 
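
Between the two runs nvmftestfini unwinds everything from the first pass: sync, modprobe -r nvme-tcp (which also pulls out nvme-fabrics and nvme-keyring, per the rmmod lines above), killprocess on the target pid, and nvmf_tcp_fini via remove_spdk_ns. The namespace removal itself is suppressed in the trace (its output is redirected away), but the visible effect is the final ip -4 addr flush cvl_0_1. A sketch of the same teardown, with the netns delete being an assumption about what remove_spdk_ns does:

    sync
    modprobe -v -r nvme-tcp            # drops nvme-tcp, nvme-fabrics, nvme-keyring
    kill "$nvmfpid"                    # killprocess: stop the nvmf_tgt reactors
    ip netns delete cvl_0_0_ns_spdk    # assumed body of remove_spdk_ns (not shown in trace)
    ip -4 addr flush cvl_0_1

The ice driver is then reloaded and nvmftestinit repeated, so the enumeration that follows is identical to the first pass; the difference comes in the ADQ configuration applied afterwards.
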
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:41.614 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:41.614 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:41.614 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:41.614 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:41.614 
05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:41.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:41.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:28:41.614 00:28:41.614 --- 10.0.0.2 ping statistics --- 00:28:41.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.614 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:41.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:41.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:28:41.614 00:28:41.614 --- 10.0.0.1 ping statistics --- 00:28:41.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.614 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:41.614 net.core.busy_poll = 1 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:41.614 net.core.busy_read = 1 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:41.614 05:17:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:41.614 05:17:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:41.614 05:17:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:41.614 05:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:41.614 05:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:41.614 05:17:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=782505 00:28:41.614 05:17:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:41.614 05:17:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 782505 00:28:41.614 05:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 782505 ']' 00:28:41.614 05:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.615 05:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:41.615 05:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:41.615 05:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:41.615 05:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:41.615 [2024-07-13 05:17:48.108117] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:28:41.615 [2024-07-13 05:17:48.108267] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:41.873 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.873 [2024-07-13 05:17:48.242125] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:42.130 [2024-07-13 05:17:48.496793] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:42.130 [2024-07-13 05:17:48.496884] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:42.130 [2024-07-13 05:17:48.496915] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:42.130 [2024-07-13 05:17:48.496935] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:42.130 [2024-07-13 05:17:48.496956] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
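
This is the ADQ-specific part of the run. adq_configure_driver turns on hardware traffic-class offload, enables busy polling, and then uses tc to carve the E810 queues into two traffic classes, steering NVMe/TCP (destination port 4420) into the dedicated class entirely in hardware. Condensed from the trace (the ethtool and tc commands run inside the target namespace via ip netns exec cvl_0_0_ns_spdk):

    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1      # spin on sockets instead of sleeping
    sysctl -w net.core.busy_read=1
    # two traffic classes: TC0 = queues 0-1, TC1 = queues 2-3, offloaded to the NIC
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    # hardware-only flower filter: NVMe/TCP to 10.0.0.2:4420 lands in TC1
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

scripts/perf/nvmf/set_xps_rxqs then aligns transmit-queue steering (XPS) with the receive queues, so a connection's TX and RX stay on the same channel.
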
00:28:42.130 [2024-07-13 05:17:48.497080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.130 [2024-07-13 05:17:48.497149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:42.130 [2024-07-13 05:17:48.497240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.130 [2024-07-13 05:17:48.497252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:42.694 05:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:42.694 05:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:28:42.694 05:17:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:42.694 05:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:42.694 05:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:42.694 05:17:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:42.694 05:17:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:28:42.694 05:17:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:42.694 05:17:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:42.694 05:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.694 05:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:42.694 05:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.694 05:17:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:42.694 05:17:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:42.694 05:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.694 05:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:42.694 05:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.694 05:17:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:42.694 05:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.694 05:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:42.952 05:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.952 05:17:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:42.952 05:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.952 05:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:42.952 [2024-07-13 05:17:49.433029] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:42.952 05:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.952 05:17:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:42.952 05:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.952 05:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:43.209 Malloc1 00:28:43.209 05:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.209 05:17:49 
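
adq_configure_nvmf_target drives the target over RPC; the only difference from the baseline pass is the argument 1, which becomes --enable-placement-id 1 on the posix sock layer and --sock-priority 1 on the TCP transport, tying accepted connections to the ADQ traffic class. The full sequence, as issued through rpc_cmd above and completed by the subsystem setup just below (rpc.py shown here as the standalone equivalent of rpc_cmd):

    ./scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 1 \
        --enable-zerocopy-send-server
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1      # 64 MiB, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp \
        -a 10.0.0.2 -s 4420
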
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:43.209 05:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.209 05:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:43.209 05:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.209 05:17:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:43.209 05:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.209 05:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:43.209 05:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.209 05:17:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:43.209 05:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.209 05:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:43.209 [2024-07-13 05:17:49.538976] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:43.209 05:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.209 05:17:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=782664 00:28:43.209 05:17:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:28:43.210 05:17:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:43.210 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.107 05:17:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:28:45.107 05:17:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.107 05:17:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:45.107 05:17:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.107 05:17:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:28:45.107 "tick_rate": 2700000000, 00:28:45.107 "poll_groups": [ 00:28:45.107 { 00:28:45.107 "name": "nvmf_tgt_poll_group_000", 00:28:45.107 "admin_qpairs": 1, 00:28:45.107 "io_qpairs": 1, 00:28:45.107 "current_admin_qpairs": 1, 00:28:45.107 "current_io_qpairs": 1, 00:28:45.107 "pending_bdev_io": 0, 00:28:45.107 "completed_nvme_io": 18378, 00:28:45.107 "transports": [ 00:28:45.107 { 00:28:45.107 "trtype": "TCP" 00:28:45.107 } 00:28:45.107 ] 00:28:45.107 }, 00:28:45.107 { 00:28:45.107 "name": "nvmf_tgt_poll_group_001", 00:28:45.107 "admin_qpairs": 0, 00:28:45.107 "io_qpairs": 3, 00:28:45.107 "current_admin_qpairs": 0, 00:28:45.107 "current_io_qpairs": 3, 00:28:45.107 "pending_bdev_io": 0, 00:28:45.107 "completed_nvme_io": 20242, 00:28:45.107 "transports": [ 00:28:45.107 { 00:28:45.107 "trtype": "TCP" 00:28:45.107 } 00:28:45.107 ] 00:28:45.107 }, 00:28:45.107 { 00:28:45.107 "name": "nvmf_tgt_poll_group_002", 00:28:45.107 "admin_qpairs": 0, 00:28:45.107 "io_qpairs": 0, 00:28:45.107 "current_admin_qpairs": 0, 00:28:45.107 "current_io_qpairs": 0, 00:28:45.107 "pending_bdev_io": 0, 00:28:45.107 "completed_nvme_io": 0, 
00:28:45.107 "transports": [ 00:28:45.107 { 00:28:45.107 "trtype": "TCP" 00:28:45.107 } 00:28:45.107 ] 00:28:45.107 }, 00:28:45.107 { 00:28:45.107 "name": "nvmf_tgt_poll_group_003", 00:28:45.107 "admin_qpairs": 0, 00:28:45.107 "io_qpairs": 0, 00:28:45.107 "current_admin_qpairs": 0, 00:28:45.107 "current_io_qpairs": 0, 00:28:45.107 "pending_bdev_io": 0, 00:28:45.108 "completed_nvme_io": 0, 00:28:45.108 "transports": [ 00:28:45.108 { 00:28:45.108 "trtype": "TCP" 00:28:45.108 } 00:28:45.108 ] 00:28:45.108 } 00:28:45.108 ] 00:28:45.108 }' 00:28:45.108 05:17:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:45.108 05:17:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:28:45.108 05:17:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:28:45.108 05:17:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:28:45.108 05:17:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 782664 00:28:55.078 Initializing NVMe Controllers 00:28:55.078 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:55.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:55.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:55.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:55.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:55.078 Initialization complete. Launching workers. 00:28:55.078 ======================================================== 00:28:55.078 Latency(us) 00:28:55.078 Device Information : IOPS MiB/s Average min max 00:28:55.078 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 3449.29 13.47 18564.98 3181.03 65867.60 00:28:55.078 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 3835.07 14.98 16751.21 2869.95 66027.79 00:28:55.078 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 3771.77 14.73 16966.68 2828.86 68006.10 00:28:55.078 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10038.37 39.21 6375.41 2409.06 9397.64 00:28:55.078 ======================================================== 00:28:55.078 Total : 21094.50 82.40 12148.72 2409.06 68006.10 00:28:55.078 00:28:55.078 05:17:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:28:55.078 05:17:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:55.078 05:17:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:28:55.078 05:17:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:55.078 05:17:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:28:55.078 05:17:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:55.078 05:17:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:55.078 rmmod nvme_tcp 00:28:55.078 rmmod nvme_fabrics 00:28:55.078 rmmod nvme_keyring 00:28:55.078 05:17:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:55.078 05:17:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:28:55.078 05:17:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:28:55.078 05:17:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 782505 ']' 00:28:55.078 05:17:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 782505 00:28:55.078 05:17:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 782505 ']' 00:28:55.078 05:17:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 782505 00:28:55.078 05:17:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:28:55.078 05:17:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:55.078 05:17:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 782505 00:28:55.078 05:17:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:55.078 05:17:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:55.078 05:17:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 782505' 00:28:55.078 killing process with pid 782505 00:28:55.078 05:17:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 782505 00:28:55.078 05:17:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 782505 00:28:55.078 05:18:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:55.078 05:18:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:55.078 05:18:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:55.078 05:18:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:55.078 05:18:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:55.078 05:18:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.078 05:18:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:55.078 05:18:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.975 05:18:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:56.975 05:18:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:56.975 00:28:56.975 real 0m48.498s 00:28:56.975 user 2m46.762s 00:28:56.975 sys 0m12.310s 00:28:56.975 05:18:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:56.975 05:18:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:56.975 ************************************ 00:28:56.975 END TEST nvmf_perf_adq 00:28:56.975 ************************************ 00:28:56.975 05:18:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:56.975 05:18:03 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:56.975 05:18:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:56.975 05:18:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:56.975 05:18:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:56.975 ************************************ 00:28:56.975 START TEST nvmf_shutdown 00:28:56.975 ************************************ 00:28:56.975 05:18:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:57.231 * Looking for test storage... 
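
The poll-group stats from the ADQ run above are the actual pass/fail signal: with placement-id enabled, all four I/O qpairs collapse onto poll groups 000/001 while 002/003 stay idle, so counting idle groups with jq must yield at least 2. That check condenses to a sketch like:

    idle=$(./scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)
    (( idle < 2 )) && echo "ADQ connection steering failed" && exit 1

One plausible reading of the latency table is the initiator-side view of the same grouping: the cores whose connections share a single target poll group (cores 4-6 at roughly 3.4-3.8k IOPS and 16-19 ms average) run far slower than the one with a group to itself (core 7 at about 10k IOPS and 6.4 ms).
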
00:28:57.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:57.231 ************************************ 00:28:57.231 START TEST nvmf_shutdown_tc1 00:28:57.231 ************************************ 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:28:57.231 05:18:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:57.231 05:18:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:57.232 05:18:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:57.232 05:18:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.232 05:18:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:57.232 05:18:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:57.232 05:18:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:57.232 05:18:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:57.232 05:18:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:57.232 05:18:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:59.166 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:59.166 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:59.167 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:59.167 05:18:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:59.167 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:59.167 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:59.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:59.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:28:59.167 00:28:59.167 --- 10.0.0.2 ping statistics --- 00:28:59.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.167 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:59.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
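
The nvmf_tcp_init sequence traced above reduces to a short run of ip and iptables commands. Condensed here for readability, same commands as in the trace, assuming the two ice ports surfaced as cvl_0_0 (target side, moved into a namespace) and cvl_0_1 (initiator side, left in the root namespace):

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1   # start from clean addresses
ip netns add cvl_0_0_ns_spdk                         # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                   # initiator -> target smoke test
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator smoke test
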
00:28:59.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:28:59.167 00:28:59.167 --- 10.0.0.1 ping statistics --- 00:28:59.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.167 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=786060 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 786060 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 786060 ']' 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:59.167 05:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:59.425 [2024-07-13 05:18:05.638896] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
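
nvmfappstart, whose launch is traced above, boils down to starting nvmf_tgt inside the target namespace and polling until its RPC socket answers. A rough sketch of that shape; $SPDK_DIR is a hypothetical shorthand for the checked-out tree, and the rpc_get_methods probe is only a stand-in for what waitforlisten actually checks:

ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# poll the RPC socket until the app is up (waitforlisten caps its retries)
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
# the next traced step then creates the transport with the options from NVMF_TRANSPORT_OPTS
"$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
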
00:28:59.425 [2024-07-13 05:18:05.639032] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.425 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.425 [2024-07-13 05:18:05.793182] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:59.683 [2024-07-13 05:18:06.060995] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.683 [2024-07-13 05:18:06.061057] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.683 [2024-07-13 05:18:06.061086] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.683 [2024-07-13 05:18:06.061107] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:59.683 [2024-07-13 05:18:06.061129] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:59.683 [2024-07-13 05:18:06.061259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:59.683 [2024-07-13 05:18:06.061367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:59.683 [2024-07-13 05:18:06.061394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:59.683 [2024-07-13 05:18:06.061383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:00.247 [2024-07-13 05:18:06.589328] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:00.247 05:18:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.247 05:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:00.247 Malloc1 00:29:00.247 [2024-07-13 05:18:06.718474] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.505 Malloc2 00:29:00.505 Malloc3 00:29:00.505 Malloc4 00:29:00.763 Malloc5 00:29:00.763 Malloc6 00:29:01.020 Malloc7 00:29:01.020 Malloc8 00:29:01.020 Malloc9 00:29:01.278 Malloc10 00:29:01.278 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.278 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:01.278 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:01.278 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:01.278 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=786535 00:29:01.278 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 786535 
/var/tmp/bdevperf.sock 00:29:01.278 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 786535 ']' 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:01.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:01.279 { 00:29:01.279 "params": { 00:29:01.279 "name": "Nvme$subsystem", 00:29:01.279 "trtype": "$TEST_TRANSPORT", 00:29:01.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.279 "adrfam": "ipv4", 00:29:01.279 "trsvcid": "$NVMF_PORT", 00:29:01.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.279 "hdgst": ${hdgst:-false}, 00:29:01.279 "ddgst": ${ddgst:-false} 00:29:01.279 }, 00:29:01.279 "method": "bdev_nvme_attach_controller" 00:29:01.279 } 00:29:01.279 EOF 00:29:01.279 )") 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:01.279 { 00:29:01.279 "params": { 00:29:01.279 "name": "Nvme$subsystem", 00:29:01.279 "trtype": "$TEST_TRANSPORT", 00:29:01.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.279 "adrfam": "ipv4", 00:29:01.279 "trsvcid": "$NVMF_PORT", 00:29:01.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.279 "hdgst": ${hdgst:-false}, 00:29:01.279 "ddgst": ${ddgst:-false} 00:29:01.279 }, 00:29:01.279 "method": "bdev_nvme_attach_controller" 00:29:01.279 } 00:29:01.279 EOF 00:29:01.279 )") 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:01.279 { 00:29:01.279 "params": { 00:29:01.279 
"name": "Nvme$subsystem", 00:29:01.279 "trtype": "$TEST_TRANSPORT", 00:29:01.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.279 "adrfam": "ipv4", 00:29:01.279 "trsvcid": "$NVMF_PORT", 00:29:01.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.279 "hdgst": ${hdgst:-false}, 00:29:01.279 "ddgst": ${ddgst:-false} 00:29:01.279 }, 00:29:01.279 "method": "bdev_nvme_attach_controller" 00:29:01.279 } 00:29:01.279 EOF 00:29:01.279 )") 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:01.279 { 00:29:01.279 "params": { 00:29:01.279 "name": "Nvme$subsystem", 00:29:01.279 "trtype": "$TEST_TRANSPORT", 00:29:01.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.279 "adrfam": "ipv4", 00:29:01.279 "trsvcid": "$NVMF_PORT", 00:29:01.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.279 "hdgst": ${hdgst:-false}, 00:29:01.279 "ddgst": ${ddgst:-false} 00:29:01.279 }, 00:29:01.279 "method": "bdev_nvme_attach_controller" 00:29:01.279 } 00:29:01.279 EOF 00:29:01.279 )") 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:01.279 { 00:29:01.279 "params": { 00:29:01.279 "name": "Nvme$subsystem", 00:29:01.279 "trtype": "$TEST_TRANSPORT", 00:29:01.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.279 "adrfam": "ipv4", 00:29:01.279 "trsvcid": "$NVMF_PORT", 00:29:01.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.279 "hdgst": ${hdgst:-false}, 00:29:01.279 "ddgst": ${ddgst:-false} 00:29:01.279 }, 00:29:01.279 "method": "bdev_nvme_attach_controller" 00:29:01.279 } 00:29:01.279 EOF 00:29:01.279 )") 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:01.279 { 00:29:01.279 "params": { 00:29:01.279 "name": "Nvme$subsystem", 00:29:01.279 "trtype": "$TEST_TRANSPORT", 00:29:01.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.279 "adrfam": "ipv4", 00:29:01.279 "trsvcid": "$NVMF_PORT", 00:29:01.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.279 "hdgst": ${hdgst:-false}, 00:29:01.279 "ddgst": ${ddgst:-false} 00:29:01.279 }, 00:29:01.279 "method": "bdev_nvme_attach_controller" 00:29:01.279 } 00:29:01.279 EOF 00:29:01.279 )") 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:01.279 { 00:29:01.279 "params": { 00:29:01.279 "name": "Nvme$subsystem", 
00:29:01.279 "trtype": "$TEST_TRANSPORT", 00:29:01.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.279 "adrfam": "ipv4", 00:29:01.279 "trsvcid": "$NVMF_PORT", 00:29:01.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.279 "hdgst": ${hdgst:-false}, 00:29:01.279 "ddgst": ${ddgst:-false} 00:29:01.279 }, 00:29:01.279 "method": "bdev_nvme_attach_controller" 00:29:01.279 } 00:29:01.279 EOF 00:29:01.279 )") 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:01.279 { 00:29:01.279 "params": { 00:29:01.279 "name": "Nvme$subsystem", 00:29:01.279 "trtype": "$TEST_TRANSPORT", 00:29:01.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.279 "adrfam": "ipv4", 00:29:01.279 "trsvcid": "$NVMF_PORT", 00:29:01.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.279 "hdgst": ${hdgst:-false}, 00:29:01.279 "ddgst": ${ddgst:-false} 00:29:01.279 }, 00:29:01.279 "method": "bdev_nvme_attach_controller" 00:29:01.279 } 00:29:01.279 EOF 00:29:01.279 )") 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:01.279 { 00:29:01.279 "params": { 00:29:01.279 "name": "Nvme$subsystem", 00:29:01.279 "trtype": "$TEST_TRANSPORT", 00:29:01.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.279 "adrfam": "ipv4", 00:29:01.279 "trsvcid": "$NVMF_PORT", 00:29:01.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.279 "hdgst": ${hdgst:-false}, 00:29:01.279 "ddgst": ${ddgst:-false} 00:29:01.279 }, 00:29:01.279 "method": "bdev_nvme_attach_controller" 00:29:01.279 } 00:29:01.279 EOF 00:29:01.279 )") 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:01.279 { 00:29:01.279 "params": { 00:29:01.279 "name": "Nvme$subsystem", 00:29:01.279 "trtype": "$TEST_TRANSPORT", 00:29:01.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.279 "adrfam": "ipv4", 00:29:01.279 "trsvcid": "$NVMF_PORT", 00:29:01.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.279 "hdgst": ${hdgst:-false}, 00:29:01.279 "ddgst": ${ddgst:-false} 00:29:01.279 }, 00:29:01.279 "method": "bdev_nvme_attach_controller" 00:29:01.279 } 00:29:01.279 EOF 00:29:01.279 )") 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:29:01.279 05:18:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:01.279 "params": { 00:29:01.279 "name": "Nvme1", 00:29:01.279 "trtype": "tcp", 00:29:01.279 "traddr": "10.0.0.2", 00:29:01.279 "adrfam": "ipv4", 00:29:01.279 "trsvcid": "4420", 00:29:01.279 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:01.279 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:01.279 "hdgst": false, 00:29:01.279 "ddgst": false 00:29:01.279 }, 00:29:01.279 "method": "bdev_nvme_attach_controller" 00:29:01.279 },{ 00:29:01.279 "params": { 00:29:01.279 "name": "Nvme2", 00:29:01.279 "trtype": "tcp", 00:29:01.279 "traddr": "10.0.0.2", 00:29:01.279 "adrfam": "ipv4", 00:29:01.279 "trsvcid": "4420", 00:29:01.279 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:01.279 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:01.279 "hdgst": false, 00:29:01.279 "ddgst": false 00:29:01.279 }, 00:29:01.279 "method": "bdev_nvme_attach_controller" 00:29:01.279 },{ 00:29:01.279 "params": { 00:29:01.279 "name": "Nvme3", 00:29:01.279 "trtype": "tcp", 00:29:01.279 "traddr": "10.0.0.2", 00:29:01.279 "adrfam": "ipv4", 00:29:01.279 "trsvcid": "4420", 00:29:01.279 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:01.279 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:01.279 "hdgst": false, 00:29:01.279 "ddgst": false 00:29:01.279 }, 00:29:01.279 "method": "bdev_nvme_attach_controller" 00:29:01.279 },{ 00:29:01.279 "params": { 00:29:01.279 "name": "Nvme4", 00:29:01.279 "trtype": "tcp", 00:29:01.279 "traddr": "10.0.0.2", 00:29:01.279 "adrfam": "ipv4", 00:29:01.279 "trsvcid": "4420", 00:29:01.279 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:01.279 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:01.279 "hdgst": false, 00:29:01.279 "ddgst": false 00:29:01.279 }, 00:29:01.279 "method": "bdev_nvme_attach_controller" 00:29:01.279 },{ 00:29:01.279 "params": { 00:29:01.279 "name": "Nvme5", 00:29:01.279 "trtype": "tcp", 00:29:01.279 "traddr": "10.0.0.2", 00:29:01.279 "adrfam": "ipv4", 00:29:01.279 "trsvcid": "4420", 00:29:01.279 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:01.279 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:01.279 "hdgst": false, 00:29:01.279 "ddgst": false 00:29:01.279 }, 00:29:01.279 "method": "bdev_nvme_attach_controller" 00:29:01.279 },{ 00:29:01.279 "params": { 00:29:01.279 "name": "Nvme6", 00:29:01.279 "trtype": "tcp", 00:29:01.279 "traddr": "10.0.0.2", 00:29:01.279 "adrfam": "ipv4", 00:29:01.279 "trsvcid": "4420", 00:29:01.279 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:01.279 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:01.279 "hdgst": false, 00:29:01.279 "ddgst": false 00:29:01.279 }, 00:29:01.279 "method": "bdev_nvme_attach_controller" 00:29:01.279 },{ 00:29:01.279 "params": { 00:29:01.279 "name": "Nvme7", 00:29:01.279 "trtype": "tcp", 00:29:01.279 "traddr": "10.0.0.2", 00:29:01.279 "adrfam": "ipv4", 00:29:01.279 "trsvcid": "4420", 00:29:01.279 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:01.279 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:01.279 "hdgst": false, 00:29:01.279 "ddgst": false 00:29:01.279 }, 00:29:01.279 "method": "bdev_nvme_attach_controller" 00:29:01.279 },{ 00:29:01.279 "params": { 00:29:01.279 "name": "Nvme8", 00:29:01.279 "trtype": "tcp", 00:29:01.279 "traddr": "10.0.0.2", 00:29:01.279 "adrfam": "ipv4", 00:29:01.279 "trsvcid": "4420", 00:29:01.279 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:01.279 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:01.279 "hdgst": false, 
00:29:01.279 "ddgst": false 00:29:01.279 }, 00:29:01.279 "method": "bdev_nvme_attach_controller" 00:29:01.279 },{ 00:29:01.279 "params": { 00:29:01.279 "name": "Nvme9", 00:29:01.279 "trtype": "tcp", 00:29:01.279 "traddr": "10.0.0.2", 00:29:01.279 "adrfam": "ipv4", 00:29:01.279 "trsvcid": "4420", 00:29:01.279 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:01.279 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:01.280 "hdgst": false, 00:29:01.280 "ddgst": false 00:29:01.280 }, 00:29:01.280 "method": "bdev_nvme_attach_controller" 00:29:01.280 },{ 00:29:01.280 "params": { 00:29:01.280 "name": "Nvme10", 00:29:01.280 "trtype": "tcp", 00:29:01.280 "traddr": "10.0.0.2", 00:29:01.280 "adrfam": "ipv4", 00:29:01.280 "trsvcid": "4420", 00:29:01.280 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:01.280 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:01.280 "hdgst": false, 00:29:01.280 "ddgst": false 00:29:01.280 }, 00:29:01.280 "method": "bdev_nvme_attach_controller" 00:29:01.280 }' 00:29:01.280 [2024-07-13 05:18:07.746474] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:01.280 [2024-07-13 05:18:07.746620] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:01.538 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.538 [2024-07-13 05:18:07.879473] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.796 [2024-07-13 05:18:08.117236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.322 05:18:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:04.322 05:18:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:29:04.322 05:18:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:04.322 05:18:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.322 05:18:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:04.322 05:18:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.322 05:18:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 786535 00:29:04.322 05:18:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:29:04.322 05:18:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:29:05.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 786535 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:05.254 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 786060 00:29:05.254 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:05.254 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:05.254 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:29:05.254 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@532 -- # local subsystem config 00:29:05.254 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:05.254 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:05.254 { 00:29:05.254 "params": { 00:29:05.254 "name": "Nvme$subsystem", 00:29:05.254 "trtype": "$TEST_TRANSPORT", 00:29:05.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.254 "adrfam": "ipv4", 00:29:05.255 "trsvcid": "$NVMF_PORT", 00:29:05.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.255 "hdgst": ${hdgst:-false}, 00:29:05.255 "ddgst": ${ddgst:-false} 00:29:05.255 }, 00:29:05.255 "method": "bdev_nvme_attach_controller" 00:29:05.255 } 00:29:05.255 EOF 00:29:05.255 )") 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:05.255 { 00:29:05.255 "params": { 00:29:05.255 "name": "Nvme$subsystem", 00:29:05.255 "trtype": "$TEST_TRANSPORT", 00:29:05.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.255 "adrfam": "ipv4", 00:29:05.255 "trsvcid": "$NVMF_PORT", 00:29:05.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.255 "hdgst": ${hdgst:-false}, 00:29:05.255 "ddgst": ${ddgst:-false} 00:29:05.255 }, 00:29:05.255 "method": "bdev_nvme_attach_controller" 00:29:05.255 } 00:29:05.255 EOF 00:29:05.255 )") 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:05.255 { 00:29:05.255 "params": { 00:29:05.255 "name": "Nvme$subsystem", 00:29:05.255 "trtype": "$TEST_TRANSPORT", 00:29:05.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.255 "adrfam": "ipv4", 00:29:05.255 "trsvcid": "$NVMF_PORT", 00:29:05.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.255 "hdgst": ${hdgst:-false}, 00:29:05.255 "ddgst": ${ddgst:-false} 00:29:05.255 }, 00:29:05.255 "method": "bdev_nvme_attach_controller" 00:29:05.255 } 00:29:05.255 EOF 00:29:05.255 )") 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:05.255 { 00:29:05.255 "params": { 00:29:05.255 "name": "Nvme$subsystem", 00:29:05.255 "trtype": "$TEST_TRANSPORT", 00:29:05.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.255 "adrfam": "ipv4", 00:29:05.255 "trsvcid": "$NVMF_PORT", 00:29:05.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.255 "hdgst": ${hdgst:-false}, 00:29:05.255 "ddgst": ${ddgst:-false} 00:29:05.255 }, 00:29:05.255 "method": "bdev_nvme_attach_controller" 00:29:05.255 } 00:29:05.255 EOF 00:29:05.255 )") 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:05.255 { 00:29:05.255 "params": { 00:29:05.255 "name": "Nvme$subsystem", 00:29:05.255 "trtype": "$TEST_TRANSPORT", 00:29:05.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.255 "adrfam": "ipv4", 00:29:05.255 "trsvcid": "$NVMF_PORT", 00:29:05.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.255 "hdgst": ${hdgst:-false}, 00:29:05.255 "ddgst": ${ddgst:-false} 00:29:05.255 }, 00:29:05.255 "method": "bdev_nvme_attach_controller" 00:29:05.255 } 00:29:05.255 EOF 00:29:05.255 )") 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:05.255 { 00:29:05.255 "params": { 00:29:05.255 "name": "Nvme$subsystem", 00:29:05.255 "trtype": "$TEST_TRANSPORT", 00:29:05.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.255 "adrfam": "ipv4", 00:29:05.255 "trsvcid": "$NVMF_PORT", 00:29:05.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.255 "hdgst": ${hdgst:-false}, 00:29:05.255 "ddgst": ${ddgst:-false} 00:29:05.255 }, 00:29:05.255 "method": "bdev_nvme_attach_controller" 00:29:05.255 } 00:29:05.255 EOF 00:29:05.255 )") 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:05.255 { 00:29:05.255 "params": { 00:29:05.255 "name": "Nvme$subsystem", 00:29:05.255 "trtype": "$TEST_TRANSPORT", 00:29:05.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.255 "adrfam": "ipv4", 00:29:05.255 "trsvcid": "$NVMF_PORT", 00:29:05.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.255 "hdgst": ${hdgst:-false}, 00:29:05.255 "ddgst": ${ddgst:-false} 00:29:05.255 }, 00:29:05.255 "method": "bdev_nvme_attach_controller" 00:29:05.255 } 00:29:05.255 EOF 00:29:05.255 )") 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:05.255 { 00:29:05.255 "params": { 00:29:05.255 "name": "Nvme$subsystem", 00:29:05.255 "trtype": "$TEST_TRANSPORT", 00:29:05.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.255 "adrfam": "ipv4", 00:29:05.255 "trsvcid": "$NVMF_PORT", 00:29:05.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.255 "hdgst": ${hdgst:-false}, 00:29:05.255 "ddgst": ${ddgst:-false} 00:29:05.255 }, 00:29:05.255 "method": "bdev_nvme_attach_controller" 00:29:05.255 } 00:29:05.255 EOF 00:29:05.255 )") 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:05.255 { 00:29:05.255 "params": { 00:29:05.255 "name": "Nvme$subsystem", 00:29:05.255 "trtype": "$TEST_TRANSPORT", 00:29:05.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.255 "adrfam": "ipv4", 00:29:05.255 "trsvcid": "$NVMF_PORT", 00:29:05.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.255 "hdgst": ${hdgst:-false}, 00:29:05.255 "ddgst": ${ddgst:-false} 00:29:05.255 }, 00:29:05.255 "method": "bdev_nvme_attach_controller" 00:29:05.255 } 00:29:05.255 EOF 00:29:05.255 )") 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:05.255 { 00:29:05.255 "params": { 00:29:05.255 "name": "Nvme$subsystem", 00:29:05.255 "trtype": "$TEST_TRANSPORT", 00:29:05.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.255 "adrfam": "ipv4", 00:29:05.255 "trsvcid": "$NVMF_PORT", 00:29:05.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.255 "hdgst": ${hdgst:-false}, 00:29:05.255 "ddgst": ${ddgst:-false} 00:29:05.255 }, 00:29:05.255 "method": "bdev_nvme_attach_controller" 00:29:05.255 } 00:29:05.255 EOF 00:29:05.255 )") 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
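
With the target JSON regenerated for the perf pass, the bdevperf invocation announced earlier in the trace takes this shape; the sketch reuses the hypothetical gen_json_sketch from the annotation above:

"$SPDK_DIR/build/examples/bdevperf" \
    --json <(gen_json_sketch 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1
# -q 64: queue depth per target; -o 65536: 64 KiB I/O size;
# -w verify: write, read back, and compare; -t 1: run for one second

The /dev/fd/62 seen in the traced command line is the descriptor that this kind of process substitution expands to.
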
00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:29:05.255 05:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:05.255 "params": { 00:29:05.255 "name": "Nvme1", 00:29:05.255 "trtype": "tcp", 00:29:05.255 "traddr": "10.0.0.2", 00:29:05.255 "adrfam": "ipv4", 00:29:05.255 "trsvcid": "4420", 00:29:05.255 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:05.255 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:05.255 "hdgst": false, 00:29:05.255 "ddgst": false 00:29:05.255 }, 00:29:05.255 "method": "bdev_nvme_attach_controller" 00:29:05.255 },{ 00:29:05.255 "params": { 00:29:05.255 "name": "Nvme2", 00:29:05.255 "trtype": "tcp", 00:29:05.255 "traddr": "10.0.0.2", 00:29:05.255 "adrfam": "ipv4", 00:29:05.255 "trsvcid": "4420", 00:29:05.255 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:05.255 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:05.255 "hdgst": false, 00:29:05.255 "ddgst": false 00:29:05.255 }, 00:29:05.255 "method": "bdev_nvme_attach_controller" 00:29:05.255 },{ 00:29:05.255 "params": { 00:29:05.255 "name": "Nvme3", 00:29:05.255 "trtype": "tcp", 00:29:05.255 "traddr": "10.0.0.2", 00:29:05.255 "adrfam": "ipv4", 00:29:05.255 "trsvcid": "4420", 00:29:05.255 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:05.255 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:05.255 "hdgst": false, 00:29:05.255 "ddgst": false 00:29:05.255 }, 00:29:05.255 "method": "bdev_nvme_attach_controller" 00:29:05.255 },{ 00:29:05.255 "params": { 00:29:05.255 "name": "Nvme4", 00:29:05.255 "trtype": "tcp", 00:29:05.255 "traddr": "10.0.0.2", 00:29:05.255 "adrfam": "ipv4", 00:29:05.255 "trsvcid": "4420", 00:29:05.256 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:05.256 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:05.256 "hdgst": false, 00:29:05.256 "ddgst": false 00:29:05.256 }, 00:29:05.256 "method": "bdev_nvme_attach_controller" 00:29:05.256 },{ 00:29:05.256 "params": { 00:29:05.256 "name": "Nvme5", 00:29:05.256 "trtype": "tcp", 00:29:05.256 "traddr": "10.0.0.2", 00:29:05.256 "adrfam": "ipv4", 00:29:05.256 "trsvcid": "4420", 00:29:05.256 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:05.256 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:05.256 "hdgst": false, 00:29:05.256 "ddgst": false 00:29:05.256 }, 00:29:05.256 "method": "bdev_nvme_attach_controller" 00:29:05.256 },{ 00:29:05.256 "params": { 00:29:05.256 "name": "Nvme6", 00:29:05.256 "trtype": "tcp", 00:29:05.256 "traddr": "10.0.0.2", 00:29:05.256 "adrfam": "ipv4", 00:29:05.256 "trsvcid": "4420", 00:29:05.256 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:05.256 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:05.256 "hdgst": false, 00:29:05.256 "ddgst": false 00:29:05.256 }, 00:29:05.256 "method": "bdev_nvme_attach_controller" 00:29:05.256 },{ 00:29:05.256 "params": { 00:29:05.256 "name": "Nvme7", 00:29:05.256 "trtype": "tcp", 00:29:05.256 "traddr": "10.0.0.2", 00:29:05.256 "adrfam": "ipv4", 00:29:05.256 "trsvcid": "4420", 00:29:05.256 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:05.256 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:05.256 "hdgst": false, 00:29:05.256 "ddgst": false 00:29:05.256 }, 00:29:05.256 "method": "bdev_nvme_attach_controller" 00:29:05.256 },{ 00:29:05.256 "params": { 00:29:05.256 "name": "Nvme8", 00:29:05.256 "trtype": "tcp", 00:29:05.256 "traddr": "10.0.0.2", 00:29:05.256 "adrfam": "ipv4", 00:29:05.256 "trsvcid": "4420", 00:29:05.256 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:05.256 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:05.256 "hdgst": false, 
00:29:05.256 "ddgst": false 00:29:05.256 }, 00:29:05.256 "method": "bdev_nvme_attach_controller" 00:29:05.256 },{ 00:29:05.256 "params": { 00:29:05.256 "name": "Nvme9", 00:29:05.256 "trtype": "tcp", 00:29:05.256 "traddr": "10.0.0.2", 00:29:05.256 "adrfam": "ipv4", 00:29:05.256 "trsvcid": "4420", 00:29:05.256 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:05.256 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:05.256 "hdgst": false, 00:29:05.256 "ddgst": false 00:29:05.256 }, 00:29:05.256 "method": "bdev_nvme_attach_controller" 00:29:05.256 },{ 00:29:05.256 "params": { 00:29:05.256 "name": "Nvme10", 00:29:05.256 "trtype": "tcp", 00:29:05.256 "traddr": "10.0.0.2", 00:29:05.256 "adrfam": "ipv4", 00:29:05.256 "trsvcid": "4420", 00:29:05.256 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:05.256 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:05.256 "hdgst": false, 00:29:05.256 "ddgst": false 00:29:05.256 }, 00:29:05.256 "method": "bdev_nvme_attach_controller" 00:29:05.256 }' 00:29:05.256 [2024-07-13 05:18:11.512598] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:05.256 [2024-07-13 05:18:11.512745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787310 ] 00:29:05.256 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.256 [2024-07-13 05:18:11.642157] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.513 [2024-07-13 05:18:11.883424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.407 Running I/O for 1 seconds... 00:29:08.779 00:29:08.779 Latency(us) 00:29:08.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.779 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.779 Verification LBA range: start 0x0 length 0x400 00:29:08.779 Nvme1n1 : 1.21 211.08 13.19 0.00 0.00 297866.05 33787.45 296708.17 00:29:08.779 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.779 Verification LBA range: start 0x0 length 0x400 00:29:08.779 Nvme2n1 : 1.14 167.73 10.48 0.00 0.00 370615.18 24369.68 338651.21 00:29:08.779 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.779 Verification LBA range: start 0x0 length 0x400 00:29:08.779 Nvme3n1 : 1.15 223.07 13.94 0.00 0.00 273634.04 20388.98 302921.96 00:29:08.779 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.779 Verification LBA range: start 0x0 length 0x400 00:29:08.779 Nvme4n1 : 1.22 209.79 13.11 0.00 0.00 287060.95 21359.88 309135.74 00:29:08.779 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.779 Verification LBA range: start 0x0 length 0x400 00:29:08.779 Nvme5n1 : 1.14 168.90 10.56 0.00 0.00 348434.58 26408.58 310689.19 00:29:08.779 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.779 Verification LBA range: start 0x0 length 0x400 00:29:08.779 Nvme6n1 : 1.13 169.82 10.61 0.00 0.00 339796.07 26020.22 315349.52 00:29:08.779 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.779 Verification LBA range: start 0x0 length 0x400 00:29:08.779 Nvme7n1 : 1.23 208.33 13.02 0.00 0.00 274272.71 20777.34 309135.74 00:29:08.779 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.779 Verification LBA range: start 
0x0 length 0x400 00:29:08.779 Nvme8n1 : 1.24 206.86 12.93 0.00 0.00 271711.00 20486.07 316902.97 00:29:08.779 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.779 Verification LBA range: start 0x0 length 0x400 00:29:08.779 Nvme9n1 : 1.25 205.30 12.83 0.00 0.00 269128.63 23690.05 313796.08 00:29:08.779 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.779 Verification LBA range: start 0x0 length 0x400 00:29:08.779 Nvme10n1 : 1.25 204.64 12.79 0.00 0.00 265133.32 22622.06 366613.24 00:29:08.779 =================================================================================================================== 00:29:08.779 Total : 1975.53 123.47 0.00 0.00 295453.09 20388.98 366613.24 00:29:09.712 05:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:29:09.712 05:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:29:09.712 05:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:09.712 05:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:09.712 05:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:29:09.712 05:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:09.712 05:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:29:09.712 05:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:09.712 05:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:29:09.712 05:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:09.712 05:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:09.712 rmmod nvme_tcp 00:29:09.712 rmmod nvme_fabrics 00:29:09.712 rmmod nvme_keyring 00:29:09.712 05:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:09.712 05:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:29:09.712 05:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:29:09.712 05:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 786060 ']' 00:29:09.712 05:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 786060 00:29:09.712 05:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 786060 ']' 00:29:09.712 05:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 786060 00:29:09.712 05:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:29:09.712 05:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:09.712 05:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 786060 00:29:09.712 05:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:09.712 05:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
00:29:09.712 05:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 786060' 00:29:09.712 killing process with pid 786060 00:29:09.712 05:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 786060 00:29:09.712 05:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 786060 00:29:12.990 05:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:12.990 05:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:12.990 05:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:12.990 05:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:12.990 05:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:12.990 05:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.990 05:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:12.990 05:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.893 05:18:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:14.893 00:29:14.893 real 0m17.469s 00:29:14.893 user 0m56.725s 00:29:14.893 sys 0m3.855s 00:29:14.893 05:18:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:14.893 05:18:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:14.893 ************************************ 00:29:14.893 END TEST nvmf_shutdown_tc1 00:29:14.893 ************************************ 00:29:14.893 05:18:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:29:14.893 05:18:21 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:14.893 05:18:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:14.893 05:18:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:14.893 05:18:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:14.893 ************************************ 00:29:14.893 START TEST nvmf_shutdown_tc2 00:29:14.893 ************************************ 00:29:14.893 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:29:14.893 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:29:14.893 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:14.893 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:14.893 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:14.893 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:14.893 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:14.893 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:14.894 05:18:21 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:14.894 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:14.894 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:14.894 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:14.894 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1
00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:29:14.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:14.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms
00:29:14.894
00:29:14.894 --- 10.0.0.2 ping statistics ---
00:29:14.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:14.894 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms
00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:14.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:14.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms
00:29:14.894
00:29:14.894 --- 10.0.0.1 ping statistics ---
00:29:14.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:14.894 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms
00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0
00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:29:14.894 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:29:14.895 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:29:14.895 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:14.895 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:29:14.895 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:14.895 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 --
nvmf/common.sh@481 -- # nvmfpid=788580 00:29:14.895 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:14.895 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 788580 00:29:14.895 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 788580 ']' 00:29:14.895 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:14.895 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:14.895 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:14.895 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:14.895 05:18:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:14.895 [2024-07-13 05:18:21.291451] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:14.895 [2024-07-13 05:18:21.291575] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:14.895 EAL: No free 2048 kB hugepages reported on node 1 00:29:15.168 [2024-07-13 05:18:21.433419] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:15.426 [2024-07-13 05:18:21.697642] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:15.427 [2024-07-13 05:18:21.697718] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:15.427 [2024-07-13 05:18:21.697745] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:15.427 [2024-07-13 05:18:21.697766] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:15.427 [2024-07-13 05:18:21.697789] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:15.427 [2024-07-13 05:18:21.697933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:15.427 [2024-07-13 05:18:21.697994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:15.427 [2024-07-13 05:18:21.698020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.427 [2024-07-13 05:18:21.698030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.993 [2024-07-13 05:18:22.277406] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:15.993 05:18:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.993 05:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.993 Malloc1 00:29:15.993 [2024-07-13 05:18:22.418745] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:16.251 Malloc2 00:29:16.251 Malloc3 00:29:16.251 Malloc4 00:29:16.522 Malloc5 00:29:16.522 Malloc6 00:29:16.522 Malloc7 00:29:16.785 Malloc8 00:29:16.785 Malloc9 00:29:17.043 Malloc10 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=788891 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 788891 /var/tmp/bdevperf.sock 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 788891 ']' 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:29:17.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:17.043 { 00:29:17.043 "params": { 00:29:17.043 "name": "Nvme$subsystem", 00:29:17.043 "trtype": "$TEST_TRANSPORT", 00:29:17.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.043 "adrfam": "ipv4", 00:29:17.043 "trsvcid": "$NVMF_PORT", 00:29:17.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.043 "hdgst": ${hdgst:-false}, 00:29:17.043 "ddgst": ${ddgst:-false} 00:29:17.043 }, 00:29:17.043 "method": "bdev_nvme_attach_controller" 00:29:17.043 } 00:29:17.043 EOF 00:29:17.043 )") 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:17.043 { 00:29:17.043 "params": { 00:29:17.043 "name": "Nvme$subsystem", 00:29:17.043 "trtype": "$TEST_TRANSPORT", 00:29:17.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.043 "adrfam": "ipv4", 00:29:17.043 "trsvcid": "$NVMF_PORT", 00:29:17.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.043 "hdgst": ${hdgst:-false}, 00:29:17.043 "ddgst": ${ddgst:-false} 00:29:17.043 }, 00:29:17.043 "method": "bdev_nvme_attach_controller" 00:29:17.043 } 00:29:17.043 EOF 00:29:17.043 )") 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:17.043 { 00:29:17.043 "params": { 00:29:17.043 "name": "Nvme$subsystem", 00:29:17.043 "trtype": "$TEST_TRANSPORT", 00:29:17.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.043 "adrfam": "ipv4", 00:29:17.043 "trsvcid": "$NVMF_PORT", 00:29:17.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.043 "hdgst": ${hdgst:-false}, 00:29:17.043 "ddgst": ${ddgst:-false} 00:29:17.043 }, 00:29:17.043 "method": "bdev_nvme_attach_controller" 00:29:17.043 } 00:29:17.043 EOF 00:29:17.043 )") 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:17.043 { 00:29:17.043 "params": { 00:29:17.043 "name": "Nvme$subsystem", 00:29:17.043 "trtype": "$TEST_TRANSPORT", 00:29:17.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.043 "adrfam": "ipv4", 00:29:17.043 "trsvcid": "$NVMF_PORT", 
00:29:17.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.043 "hdgst": ${hdgst:-false}, 00:29:17.043 "ddgst": ${ddgst:-false} 00:29:17.043 }, 00:29:17.043 "method": "bdev_nvme_attach_controller" 00:29:17.043 } 00:29:17.043 EOF 00:29:17.043 )") 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:17.043 { 00:29:17.043 "params": { 00:29:17.043 "name": "Nvme$subsystem", 00:29:17.043 "trtype": "$TEST_TRANSPORT", 00:29:17.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.043 "adrfam": "ipv4", 00:29:17.043 "trsvcid": "$NVMF_PORT", 00:29:17.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.043 "hdgst": ${hdgst:-false}, 00:29:17.043 "ddgst": ${ddgst:-false} 00:29:17.043 }, 00:29:17.043 "method": "bdev_nvme_attach_controller" 00:29:17.043 } 00:29:17.043 EOF 00:29:17.043 )") 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:17.043 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:17.044 { 00:29:17.044 "params": { 00:29:17.044 "name": "Nvme$subsystem", 00:29:17.044 "trtype": "$TEST_TRANSPORT", 00:29:17.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.044 "adrfam": "ipv4", 00:29:17.044 "trsvcid": "$NVMF_PORT", 00:29:17.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.044 "hdgst": ${hdgst:-false}, 00:29:17.044 "ddgst": ${ddgst:-false} 00:29:17.044 }, 00:29:17.044 "method": "bdev_nvme_attach_controller" 00:29:17.044 } 00:29:17.044 EOF 00:29:17.044 )") 00:29:17.044 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:17.044 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:17.044 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:17.044 { 00:29:17.044 "params": { 00:29:17.044 "name": "Nvme$subsystem", 00:29:17.044 "trtype": "$TEST_TRANSPORT", 00:29:17.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.044 "adrfam": "ipv4", 00:29:17.044 "trsvcid": "$NVMF_PORT", 00:29:17.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.044 "hdgst": ${hdgst:-false}, 00:29:17.044 "ddgst": ${ddgst:-false} 00:29:17.044 }, 00:29:17.044 "method": "bdev_nvme_attach_controller" 00:29:17.044 } 00:29:17.044 EOF 00:29:17.044 )") 00:29:17.044 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:17.044 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:17.044 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:17.044 { 00:29:17.044 "params": { 00:29:17.044 "name": "Nvme$subsystem", 00:29:17.044 "trtype": "$TEST_TRANSPORT", 00:29:17.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.044 "adrfam": "ipv4", 00:29:17.044 "trsvcid": "$NVMF_PORT", 00:29:17.044 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.044 "hdgst": ${hdgst:-false}, 00:29:17.044 "ddgst": ${ddgst:-false} 00:29:17.044 }, 00:29:17.044 "method": "bdev_nvme_attach_controller" 00:29:17.044 } 00:29:17.044 EOF 00:29:17.044 )") 00:29:17.044 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:17.044 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:17.044 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:17.044 { 00:29:17.044 "params": { 00:29:17.044 "name": "Nvme$subsystem", 00:29:17.044 "trtype": "$TEST_TRANSPORT", 00:29:17.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.044 "adrfam": "ipv4", 00:29:17.044 "trsvcid": "$NVMF_PORT", 00:29:17.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.044 "hdgst": ${hdgst:-false}, 00:29:17.044 "ddgst": ${ddgst:-false} 00:29:17.044 }, 00:29:17.044 "method": "bdev_nvme_attach_controller" 00:29:17.044 } 00:29:17.044 EOF 00:29:17.044 )") 00:29:17.044 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:17.044 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:17.044 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:17.044 { 00:29:17.044 "params": { 00:29:17.044 "name": "Nvme$subsystem", 00:29:17.044 "trtype": "$TEST_TRANSPORT", 00:29:17.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.044 "adrfam": "ipv4", 00:29:17.044 "trsvcid": "$NVMF_PORT", 00:29:17.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.044 "hdgst": ${hdgst:-false}, 00:29:17.044 "ddgst": ${ddgst:-false} 00:29:17.044 }, 00:29:17.044 "method": "bdev_nvme_attach_controller" 00:29:17.044 } 00:29:17.044 EOF 00:29:17.044 )") 00:29:17.044 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:17.044 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:29:17.044 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:29:17.044 05:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:17.044 "params": { 00:29:17.044 "name": "Nvme1", 00:29:17.044 "trtype": "tcp", 00:29:17.044 "traddr": "10.0.0.2", 00:29:17.044 "adrfam": "ipv4", 00:29:17.044 "trsvcid": "4420", 00:29:17.044 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:17.044 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:17.044 "hdgst": false, 00:29:17.044 "ddgst": false 00:29:17.044 }, 00:29:17.044 "method": "bdev_nvme_attach_controller" 00:29:17.044 },{ 00:29:17.044 "params": { 00:29:17.044 "name": "Nvme2", 00:29:17.044 "trtype": "tcp", 00:29:17.044 "traddr": "10.0.0.2", 00:29:17.044 "adrfam": "ipv4", 00:29:17.044 "trsvcid": "4420", 00:29:17.044 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:17.044 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:17.044 "hdgst": false, 00:29:17.044 "ddgst": false 00:29:17.044 }, 00:29:17.044 "method": "bdev_nvme_attach_controller" 00:29:17.044 },{ 00:29:17.044 "params": { 00:29:17.044 "name": "Nvme3", 00:29:17.044 "trtype": "tcp", 00:29:17.044 "traddr": "10.0.0.2", 00:29:17.044 "adrfam": "ipv4", 00:29:17.044 "trsvcid": "4420", 00:29:17.044 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:17.044 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:17.044 "hdgst": false, 00:29:17.044 "ddgst": false 00:29:17.044 }, 00:29:17.044 "method": "bdev_nvme_attach_controller" 00:29:17.044 },{ 00:29:17.044 "params": { 00:29:17.044 "name": "Nvme4", 00:29:17.044 "trtype": "tcp", 00:29:17.044 "traddr": "10.0.0.2", 00:29:17.044 "adrfam": "ipv4", 00:29:17.044 "trsvcid": "4420", 00:29:17.044 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:17.044 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:17.044 "hdgst": false, 00:29:17.044 "ddgst": false 00:29:17.044 }, 00:29:17.044 "method": "bdev_nvme_attach_controller" 00:29:17.044 },{ 00:29:17.044 "params": { 00:29:17.044 "name": "Nvme5", 00:29:17.044 "trtype": "tcp", 00:29:17.044 "traddr": "10.0.0.2", 00:29:17.044 "adrfam": "ipv4", 00:29:17.044 "trsvcid": "4420", 00:29:17.044 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:17.044 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:17.044 "hdgst": false, 00:29:17.044 "ddgst": false 00:29:17.044 }, 00:29:17.044 "method": "bdev_nvme_attach_controller" 00:29:17.044 },{ 00:29:17.044 "params": { 00:29:17.044 "name": "Nvme6", 00:29:17.044 "trtype": "tcp", 00:29:17.044 "traddr": "10.0.0.2", 00:29:17.044 "adrfam": "ipv4", 00:29:17.044 "trsvcid": "4420", 00:29:17.044 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:17.044 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:17.044 "hdgst": false, 00:29:17.044 "ddgst": false 00:29:17.044 }, 00:29:17.044 "method": "bdev_nvme_attach_controller" 00:29:17.044 },{ 00:29:17.044 "params": { 00:29:17.044 "name": "Nvme7", 00:29:17.044 "trtype": "tcp", 00:29:17.044 "traddr": "10.0.0.2", 00:29:17.044 "adrfam": "ipv4", 00:29:17.044 "trsvcid": "4420", 00:29:17.044 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:17.044 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:17.044 "hdgst": false, 00:29:17.044 "ddgst": false 00:29:17.044 }, 00:29:17.044 "method": "bdev_nvme_attach_controller" 00:29:17.044 },{ 00:29:17.044 "params": { 00:29:17.044 "name": "Nvme8", 00:29:17.044 "trtype": "tcp", 00:29:17.044 "traddr": "10.0.0.2", 00:29:17.044 "adrfam": "ipv4", 00:29:17.044 "trsvcid": "4420", 00:29:17.044 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:17.044 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:17.044 "hdgst": false, 
00:29:17.044 "ddgst": false 00:29:17.044 }, 00:29:17.044 "method": "bdev_nvme_attach_controller" 00:29:17.044 },{ 00:29:17.044 "params": { 00:29:17.044 "name": "Nvme9", 00:29:17.044 "trtype": "tcp", 00:29:17.044 "traddr": "10.0.0.2", 00:29:17.044 "adrfam": "ipv4", 00:29:17.044 "trsvcid": "4420", 00:29:17.044 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:17.044 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:17.044 "hdgst": false, 00:29:17.044 "ddgst": false 00:29:17.044 }, 00:29:17.044 "method": "bdev_nvme_attach_controller" 00:29:17.044 },{ 00:29:17.044 "params": { 00:29:17.044 "name": "Nvme10", 00:29:17.044 "trtype": "tcp", 00:29:17.044 "traddr": "10.0.0.2", 00:29:17.044 "adrfam": "ipv4", 00:29:17.044 "trsvcid": "4420", 00:29:17.044 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:17.044 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:17.044 "hdgst": false, 00:29:17.044 "ddgst": false 00:29:17.044 }, 00:29:17.044 "method": "bdev_nvme_attach_controller" 00:29:17.044 }' 00:29:17.044 [2024-07-13 05:18:23.416620] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:17.044 [2024-07-13 05:18:23.416783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid788891 ] 00:29:17.044 EAL: No free 2048 kB hugepages reported on node 1 00:29:17.303 [2024-07-13 05:18:23.548357] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.303 [2024-07-13 05:18:23.787001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.203 Running I/O for 10 seconds... 00:29:19.771 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:19.771 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:19.771 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:19.771 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.771 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.771 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.771 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:19.771 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:19.771 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:29:19.771 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:29:19.771 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:29:19.771 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:29:19.771 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:19.771 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:19.771 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:19.771 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 
-- common/autotest_common.sh@559 -- # xtrace_disable
00:29:19.771 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:19.771 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:19.771 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67
00:29:19.771 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']'
00:29:19.771 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25
00:29:20.029 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- ))
00:29:20.029 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:29:20.029 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:29:20.029 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:29:20.029 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:20.029 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:20.029 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:20.029 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131
00:29:20.029 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']'
00:29:20.029 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0
00:29:20.029 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break
00:29:20.029 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0
00:29:20.029 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 788891
00:29:20.029 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 788891 ']'
00:29:20.029 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 788891
00:29:20.029 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname
00:29:20.029 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:20.029 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 788891
00:29:20.287 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:29:20.287 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:29:20.287 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 788891'
00:29:20.287 killing process with pid 788891
00:29:20.287 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 788891
00:29:20.287 05:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 788891
00:29:20.287 Received shutdown signal, test time was about 1.123707 seconds
00:29:20.287
00:29:20.287 Latency(us)
00:29:20.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:20.287 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:20.287 Verification LBA range: start 0x0 length 0x400
00:29:20.287 Nvme1n1 : 1.07 183.59 11.47 0.00 0.00 339887.33 3543.80 304475.40
00:29:20.287 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:20.287 Verification LBA range: start 0x0 length 0x400
00:29:20.287 Nvme2n1 : 1.09 180.04 11.25 0.00 0.00 342581.64 7670.14 310689.19
00:29:20.287 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:20.287 Verification LBA range: start 0x0 length 0x400
00:29:20.287 Nvme3n1 : 1.12 228.98 14.31 0.00 0.00 265342.48 21845.33 310689.19
00:29:20.287 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:20.287 Verification LBA range: start 0x0 length 0x400
00:29:20.287 Nvme4n1 : 1.11 230.78 14.42 0.00 0.00 258048.57 21068.61 306028.85
00:29:20.287 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:20.287 Verification LBA range: start 0x0 length 0x400
00:29:20.288 Nvme5n1 : 1.12 227.98 14.25 0.00 0.00 256998.21 21359.88 310689.19
00:29:20.288 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:20.288 Verification LBA range: start 0x0 length 0x400
00:29:20.288 Nvme6n1 : 1.05 182.34 11.40 0.00 0.00 313873.19 22816.24 296708.17
00:29:20.288 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:20.288 Verification LBA range: start 0x0 length 0x400
00:29:20.288 Nvme7n1 : 1.08 177.15 11.07 0.00 0.00 317535.07 25437.68 310689.19
00:29:20.288 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:20.288 Verification LBA range: start 0x0 length 0x400
00:29:20.288 Nvme8n1 : 1.06 180.30 11.27 0.00 0.00 304764.65 22622.06 298261.62
00:29:20.288 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:20.288 Verification LBA range: start 0x0 length 0x400
00:29:20.288 Nvme9n1 : 1.10 175.10 10.94 0.00 0.00 308687.96 22427.88 312242.63
00:29:20.288 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:20.288 Verification LBA range: start 0x0 length 0x400
00:29:20.288 Nvme10n1 : 1.11 173.69 10.86 0.00 0.00 305137.34 25437.68 341758.10
00:29:20.288 ===================================================================================================================
00:29:20.288 Total : 1939.95 121.25 0.00 0.00 297728.95 3543.80 341758.10
00:29:21.663 05:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:29:22.595 05:18:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 788580
00:29:22.595 05:18:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:29:22.595 05:18:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:29:22.595 05:18:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:29:22.595 05:18:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:29:22.595 05:18:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini
00:29:22.595 05:18:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup
00:29:22.595 05:18:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync
00:29:22.595 05:18:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:29:22.595 05:18:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e
00:29:22.595 05:18:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20}
00:29:22.595 05:18:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:29:22.595 rmmod nvme_tcp
00:29:22.595 rmmod nvme_fabrics
00:29:22.595 rmmod nvme_keyring
00:29:22.595 05:18:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:29:22.595 05:18:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e
00:29:22.595 05:18:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0
00:29:22.595 05:18:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 788580 ']'
00:29:22.595 05:18:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 788580
00:29:22.595 05:18:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 788580 ']'
00:29:22.595 05:18:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 788580
00:29:22.595 05:18:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname
00:29:22.595 05:18:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:22.595 05:18:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 788580
00:29:22.595 05:18:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:29:22.595 05:18:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:29:22.595 05:18:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 788580'
00:29:22.595 killing process with pid 788580
00:29:22.595 05:18:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 788580
00:29:22.595 05:18:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 788580
00:29:25.876 05:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:29:25.876 05:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:29:25.876 05:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:29:25.876 05:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:29:25.876 05:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:29:25.876 05:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:25.876 05:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:29:25.876 05:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:29:27.776
00:29:27.776 real 0m12.739s
00:29:27.776 user 0m42.334s
00:29:27.776 sys 0m2.056s
00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- #
xtrace_disable 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:27.776 ************************************ 00:29:27.776 END TEST nvmf_shutdown_tc2 00:29:27.776 ************************************ 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:27.776 ************************************ 00:29:27.776 START TEST nvmf_shutdown_tc3 00:29:27.776 ************************************ 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:29:27.776 05:18:33 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:27.776 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:27.776 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:27.777 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:27.777 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:27.777 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:27.777 05:18:33 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:27.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:27.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:29:27.777 00:29:27.777 --- 10.0.0.2 ping statistics --- 00:29:27.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.777 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:27.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:27.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:29:27.777 00:29:27.777 --- 10.0.0.1 ping statistics --- 00:29:27.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.777 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:27.777 05:18:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:27.777 05:18:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:29:27.777 05:18:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:27.777 05:18:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:27.777 05:18:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:27.777 05:18:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=790210 00:29:27.777 05:18:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:27.777 05:18:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 790210 00:29:27.777 05:18:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 790210 ']' 00:29:27.777 05:18:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:27.777 05:18:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:27.777 05:18:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:27.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
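Stripped of the xtrace noise, the nvmf_tcp_init sequence replayed above wires the test topology like this: the target-side port cvl_0_0 moves into a private network namespace and answers on 10.0.0.2:4420, while the initiator keeps cvl_0_1 in the default namespace. The commands are copied from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port leaves the default ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target (0.175 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator (0.093 ms above)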
00:29:27.777 05:18:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:27.777 05:18:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:27.777 [2024-07-13 05:18:34.096377] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:27.777 [2024-07-13 05:18:34.096534] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:27.777 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.777 [2024-07-13 05:18:34.237904] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:28.036 [2024-07-13 05:18:34.503356] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:28.036 [2024-07-13 05:18:34.503426] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:28.036 [2024-07-13 05:18:34.503455] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:28.036 [2024-07-13 05:18:34.503477] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:28.036 [2024-07-13 05:18:34.503499] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:28.036 [2024-07-13 05:18:34.503631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:28.036 [2024-07-13 05:18:34.503733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:28.036 [2024-07-13 05:18:34.503798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:28.036 [2024-07-13 05:18:34.503805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:28.601 05:18:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:28.601 05:18:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:29:28.601 05:18:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:28.601 05:18:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:28.601 05:18:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:28.601 [2024-07-13 05:18:35.006076] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.601 05:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:28.859 Malloc1 00:29:28.859 [2024-07-13 05:18:35.134305] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:28.859 Malloc2 00:29:28.859 Malloc3 00:29:29.116 Malloc4 00:29:29.116 Malloc5 00:29:29.116 Malloc6 00:29:29.374 Malloc7 00:29:29.374 Malloc8 00:29:29.632 Malloc9 00:29:29.632 Malloc10 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:29.632 
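Each pass of the create_subsystems loop above appends a heredoc to rpcs.txt, and the file is then replayed as one rpc_cmd batch (target/shutdown.sh@28 and @35). xtrace does not echo the heredoc body, so the stanza below is an assumption, inferred only from the MallocN bdevs just created and the nqn.2016-06.io.spdk:cnodeN names that appear in the bdevperf JSON further down; the malloc size arguments are placeholders:

    # Assumed shape of one rpcs.txt stanza for subsystem $i
    bdev_malloc_create -b Malloc$i 64 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420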
05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=790522 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 790522 /var/tmp/bdevperf.sock 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 790522 ']' 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:29.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:29.632 { 00:29:29.632 "params": { 00:29:29.632 "name": "Nvme$subsystem", 00:29:29.632 "trtype": "$TEST_TRANSPORT", 00:29:29.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.632 "adrfam": "ipv4", 00:29:29.632 "trsvcid": "$NVMF_PORT", 00:29:29.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.632 "hdgst": ${hdgst:-false}, 00:29:29.632 "ddgst": ${ddgst:-false} 00:29:29.632 }, 00:29:29.632 "method": "bdev_nvme_attach_controller" 00:29:29.632 } 00:29:29.632 EOF 00:29:29.632 )") 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:29.632 { 00:29:29.632 "params": { 00:29:29.632 "name": "Nvme$subsystem", 00:29:29.632 "trtype": "$TEST_TRANSPORT", 00:29:29.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.632 "adrfam": "ipv4", 00:29:29.632 "trsvcid": "$NVMF_PORT", 00:29:29.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.632 "hdgst": ${hdgst:-false}, 00:29:29.632 "ddgst": ${ddgst:-false} 00:29:29.632 }, 00:29:29.632 "method": "bdev_nvme_attach_controller" 00:29:29.632 } 00:29:29.632 EOF 00:29:29.632 )") 00:29:29.632 05:18:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:29.632 { 00:29:29.632 "params": { 00:29:29.632 "name": "Nvme$subsystem", 00:29:29.632 "trtype": "$TEST_TRANSPORT", 00:29:29.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.632 "adrfam": "ipv4", 00:29:29.632 "trsvcid": "$NVMF_PORT", 00:29:29.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.632 "hdgst": ${hdgst:-false}, 00:29:29.632 "ddgst": ${ddgst:-false} 00:29:29.632 }, 00:29:29.632 "method": "bdev_nvme_attach_controller" 00:29:29.632 } 00:29:29.632 EOF 00:29:29.632 )") 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:29.632 { 00:29:29.632 "params": { 00:29:29.632 "name": "Nvme$subsystem", 00:29:29.632 "trtype": "$TEST_TRANSPORT", 00:29:29.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.632 "adrfam": "ipv4", 00:29:29.632 "trsvcid": "$NVMF_PORT", 00:29:29.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.632 "hdgst": ${hdgst:-false}, 00:29:29.632 "ddgst": ${ddgst:-false} 00:29:29.632 }, 00:29:29.632 "method": "bdev_nvme_attach_controller" 00:29:29.632 } 00:29:29.632 EOF 00:29:29.632 )") 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:29.632 { 00:29:29.632 "params": { 00:29:29.632 "name": "Nvme$subsystem", 00:29:29.632 "trtype": "$TEST_TRANSPORT", 00:29:29.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.632 "adrfam": "ipv4", 00:29:29.632 "trsvcid": "$NVMF_PORT", 00:29:29.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.632 "hdgst": ${hdgst:-false}, 00:29:29.632 "ddgst": ${ddgst:-false} 00:29:29.632 }, 00:29:29.632 "method": "bdev_nvme_attach_controller" 00:29:29.632 } 00:29:29.632 EOF 00:29:29.632 )") 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:29.632 { 00:29:29.632 "params": { 00:29:29.632 "name": "Nvme$subsystem", 00:29:29.632 "trtype": "$TEST_TRANSPORT", 00:29:29.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.632 "adrfam": "ipv4", 00:29:29.632 "trsvcid": "$NVMF_PORT", 00:29:29.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.632 "hdgst": ${hdgst:-false}, 00:29:29.632 "ddgst": ${ddgst:-false} 00:29:29.632 }, 00:29:29.632 "method": "bdev_nvme_attach_controller" 00:29:29.632 } 00:29:29.632 EOF 00:29:29.632 )") 00:29:29.632 05:18:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:29.632 { 00:29:29.632 "params": { 00:29:29.632 "name": "Nvme$subsystem", 00:29:29.632 "trtype": "$TEST_TRANSPORT", 00:29:29.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.632 "adrfam": "ipv4", 00:29:29.632 "trsvcid": "$NVMF_PORT", 00:29:29.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.632 "hdgst": ${hdgst:-false}, 00:29:29.632 "ddgst": ${ddgst:-false} 00:29:29.632 }, 00:29:29.632 "method": "bdev_nvme_attach_controller" 00:29:29.632 } 00:29:29.632 EOF 00:29:29.632 )") 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:29.632 { 00:29:29.632 "params": { 00:29:29.632 "name": "Nvme$subsystem", 00:29:29.632 "trtype": "$TEST_TRANSPORT", 00:29:29.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.632 "adrfam": "ipv4", 00:29:29.632 "trsvcid": "$NVMF_PORT", 00:29:29.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.632 "hdgst": ${hdgst:-false}, 00:29:29.632 "ddgst": ${ddgst:-false} 00:29:29.632 }, 00:29:29.632 "method": "bdev_nvme_attach_controller" 00:29:29.632 } 00:29:29.632 EOF 00:29:29.632 )") 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:29.632 { 00:29:29.632 "params": { 00:29:29.632 "name": "Nvme$subsystem", 00:29:29.632 "trtype": "$TEST_TRANSPORT", 00:29:29.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.632 "adrfam": "ipv4", 00:29:29.632 "trsvcid": "$NVMF_PORT", 00:29:29.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.632 "hdgst": ${hdgst:-false}, 00:29:29.632 "ddgst": ${ddgst:-false} 00:29:29.632 }, 00:29:29.632 "method": "bdev_nvme_attach_controller" 00:29:29.632 } 00:29:29.632 EOF 00:29:29.632 )") 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:29.632 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:29.632 { 00:29:29.632 "params": { 00:29:29.632 "name": "Nvme$subsystem", 00:29:29.632 "trtype": "$TEST_TRANSPORT", 00:29:29.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.632 "adrfam": "ipv4", 00:29:29.632 "trsvcid": "$NVMF_PORT", 00:29:29.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.633 "hdgst": ${hdgst:-false}, 00:29:29.633 "ddgst": ${ddgst:-false} 00:29:29.633 }, 00:29:29.633 "method": "bdev_nvme_attach_controller" 00:29:29.633 } 00:29:29.633 EOF 00:29:29.633 )") 00:29:29.633 05:18:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:29.633 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:29:29.633 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:29:29.633 05:18:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:29.633 "params": { 00:29:29.633 "name": "Nvme1", 00:29:29.633 "trtype": "tcp", 00:29:29.633 "traddr": "10.0.0.2", 00:29:29.633 "adrfam": "ipv4", 00:29:29.633 "trsvcid": "4420", 00:29:29.633 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:29.633 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:29.633 "hdgst": false, 00:29:29.633 "ddgst": false 00:29:29.633 }, 00:29:29.633 "method": "bdev_nvme_attach_controller" 00:29:29.633 },{ 00:29:29.633 "params": { 00:29:29.633 "name": "Nvme2", 00:29:29.633 "trtype": "tcp", 00:29:29.633 "traddr": "10.0.0.2", 00:29:29.633 "adrfam": "ipv4", 00:29:29.633 "trsvcid": "4420", 00:29:29.633 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:29.633 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:29.633 "hdgst": false, 00:29:29.633 "ddgst": false 00:29:29.633 }, 00:29:29.633 "method": "bdev_nvme_attach_controller" 00:29:29.633 },{ 00:29:29.633 "params": { 00:29:29.633 "name": "Nvme3", 00:29:29.633 "trtype": "tcp", 00:29:29.633 "traddr": "10.0.0.2", 00:29:29.633 "adrfam": "ipv4", 00:29:29.633 "trsvcid": "4420", 00:29:29.633 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:29.633 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:29.633 "hdgst": false, 00:29:29.633 "ddgst": false 00:29:29.633 }, 00:29:29.633 "method": "bdev_nvme_attach_controller" 00:29:29.633 },{ 00:29:29.633 "params": { 00:29:29.633 "name": "Nvme4", 00:29:29.633 "trtype": "tcp", 00:29:29.633 "traddr": "10.0.0.2", 00:29:29.633 "adrfam": "ipv4", 00:29:29.633 "trsvcid": "4420", 00:29:29.633 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:29.633 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:29.633 "hdgst": false, 00:29:29.633 "ddgst": false 00:29:29.633 }, 00:29:29.633 "method": "bdev_nvme_attach_controller" 00:29:29.633 },{ 00:29:29.633 "params": { 00:29:29.633 "name": "Nvme5", 00:29:29.633 "trtype": "tcp", 00:29:29.633 "traddr": "10.0.0.2", 00:29:29.633 "adrfam": "ipv4", 00:29:29.633 "trsvcid": "4420", 00:29:29.633 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:29.633 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:29.633 "hdgst": false, 00:29:29.633 "ddgst": false 00:29:29.633 }, 00:29:29.633 "method": "bdev_nvme_attach_controller" 00:29:29.633 },{ 00:29:29.633 "params": { 00:29:29.633 "name": "Nvme6", 00:29:29.633 "trtype": "tcp", 00:29:29.633 "traddr": "10.0.0.2", 00:29:29.633 "adrfam": "ipv4", 00:29:29.633 "trsvcid": "4420", 00:29:29.633 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:29.633 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:29.633 "hdgst": false, 00:29:29.633 "ddgst": false 00:29:29.633 }, 00:29:29.633 "method": "bdev_nvme_attach_controller" 00:29:29.633 },{ 00:29:29.633 "params": { 00:29:29.633 "name": "Nvme7", 00:29:29.633 "trtype": "tcp", 00:29:29.633 "traddr": "10.0.0.2", 00:29:29.633 "adrfam": "ipv4", 00:29:29.633 "trsvcid": "4420", 00:29:29.633 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:29.633 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:29.633 "hdgst": false, 00:29:29.633 "ddgst": false 00:29:29.633 }, 00:29:29.633 "method": "bdev_nvme_attach_controller" 00:29:29.633 },{ 00:29:29.633 "params": { 00:29:29.633 "name": "Nvme8", 00:29:29.633 "trtype": "tcp", 00:29:29.633 "traddr": "10.0.0.2", 00:29:29.633 "adrfam": "ipv4", 
00:29:29.633 "trsvcid": "4420", 00:29:29.633 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:29.633 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:29.633 "hdgst": false, 00:29:29.633 "ddgst": false 00:29:29.633 }, 00:29:29.633 "method": "bdev_nvme_attach_controller" 00:29:29.633 },{ 00:29:29.633 "params": { 00:29:29.633 "name": "Nvme9", 00:29:29.633 "trtype": "tcp", 00:29:29.633 "traddr": "10.0.0.2", 00:29:29.633 "adrfam": "ipv4", 00:29:29.633 "trsvcid": "4420", 00:29:29.633 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:29.633 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:29.633 "hdgst": false, 00:29:29.633 "ddgst": false 00:29:29.633 }, 00:29:29.633 "method": "bdev_nvme_attach_controller" 00:29:29.633 },{ 00:29:29.633 "params": { 00:29:29.633 "name": "Nvme10", 00:29:29.633 "trtype": "tcp", 00:29:29.633 "traddr": "10.0.0.2", 00:29:29.633 "adrfam": "ipv4", 00:29:29.633 "trsvcid": "4420", 00:29:29.633 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:29.633 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:29.633 "hdgst": false, 00:29:29.633 "ddgst": false 00:29:29.633 }, 00:29:29.633 "method": "bdev_nvme_attach_controller" 00:29:29.633 }' 00:29:29.892 [2024-07-13 05:18:36.155559] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:29.892 [2024-07-13 05:18:36.155716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid790522 ] 00:29:29.892 EAL: No free 2048 kB hugepages reported on node 1 00:29:29.892 [2024-07-13 05:18:36.282477] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.150 [2024-07-13 05:18:36.522074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.046 Running I/O for 10 seconds... 
00:29:32.612 05:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:32.612 05:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:29:32.612 05:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:32.612 05:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.612 05:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:32.612 05:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.612 05:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:32.612 05:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:32.612 05:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:32.612 05:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:29:32.612 05:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:29:32.612 05:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:29:32.612 05:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:29:32.612 05:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:32.612 05:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:32.612 05:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:32.612 05:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.612 05:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:32.612 05:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.612 05:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:29:32.612 05:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:29:32.612 05:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:29:32.883 05:18:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:29:32.883 05:18:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:32.883 05:18:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:32.883 05:18:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:32.883 05:18:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.883 05:18:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:32.883 05:18:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.883 05:18:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # read_io_count=131 00:29:32.883 05:18:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:29:32.883 05:18:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:29:32.883 05:18:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:29:32.883 05:18:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:29:32.883 05:18:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 790210 00:29:32.884 05:18:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 790210 ']' 00:29:32.884 05:18:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 790210 00:29:32.884 05:18:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:29:32.884 05:18:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:32.884 05:18:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 790210 00:29:32.884 05:18:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:32.884 05:18:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:32.884 05:18:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 790210' 00:29:32.884 killing process with pid 790210 00:29:32.884 05:18:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 790210 00:29:32.884 05:18:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 790210 00:29:32.884 [2024-07-13 05:18:39.226822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:32.884 [2024-07-13 05:18:39.226968] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:32.884 [2024-07-13 05:18:39.227003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:32.884 [2024-07-13 05:18:39.227023] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:32.884 [2024-07-13 05:18:39.227056] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:32.884 [2024-07-13 05:18:39.227091] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:32.884 [2024-07-13 05:18:39.227116] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:32.884 [2024-07-13 05:18:39.227142] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:32.884 [2024-07-13 05:18:39.227172] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:32.884 [2024-07-13 05:18:39.227190] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:32.884 [2024-07-13 
05:18:39.227209] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:32.884 [... same *ERROR* repeated for tqpair=0x61800000a080 at 05:18:39.227232 through 05:18:39.228511 ...] 00:29:32.884 [2024-07-13 05:18:39.232181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:29:32.884 [... same *ERROR* repeated twice for tqpair=0x61800000c480 ...] 00:29:32.884 [2024-07-13 05:18:39.233971] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [... same *ERROR* repeated for tqpair=0x61800000a480 at 05:18:39.234006 through 05:18:39.234587 ...] 00:29:32.885 [2024-07-13
05:18:39.234605] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.234623] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.234641] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.234659] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.234678] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.234697] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.234716] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.234734] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.234753] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.234772] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.234790] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.234814] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.234834] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.234863] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.234891] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.234910] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.234928] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.234947] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.234965] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.234983] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.235002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 
05:18:39.235020] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.235038] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.235057] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.235075] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.235094] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.235112] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.235130] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.235148] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.235175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.235192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.235211] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.238408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.238450] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.238473] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.243609] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.243644] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.243665] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.243690] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.243709] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.243727] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.243746] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 
05:18:39.243766] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.243784] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.243803] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.243821] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.243840] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.243861] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.243889] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.243908] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.243927] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.243945] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.243962] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.243981] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.243999] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.244017] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.244035] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.244053] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.244071] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.244089] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.244107] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.244124] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.885 [2024-07-13 05:18:39.244142] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 
05:18:39.244167] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244191] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244228] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244246] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244264] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244283] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244301] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244319] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244337] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244355] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244374] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244392] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244410] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244429] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244446] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244465] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244500] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244519] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244538] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 
05:18:39.244556] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244574] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244592] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244610] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244628] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244646] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244672] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244691] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244710] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244728] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244769] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244789] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244807] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.244825] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.247944] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.247978] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.247999] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248018] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248037] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248056] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248074] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 
05:18:39.248092] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248110] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248128] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248147] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248173] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248191] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248228] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248246] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248264] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248281] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248326] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248344] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248362] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248381] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248399] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248417] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248436] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248454] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 
05:18:39.248491] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248510] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248528] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248546] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248564] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248582] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248600] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248619] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248637] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248655] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248674] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248692] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248710] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.886 [2024-07-13 05:18:39.248729] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.887 [2024-07-13 05:18:39.248748] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.887 [2024-07-13 05:18:39.248765] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.887 [2024-07-13 05:18:39.248783] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.887 [2024-07-13 05:18:39.248805] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.887 [2024-07-13 05:18:39.248824] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.887 [2024-07-13 05:18:39.248843] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.887 [2024-07-13 05:18:39.248874] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.887 [2024-07-13 
05:18:39.248894] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.887 [2024-07-13 05:18:39.248913] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.887 [2024-07-13 05:18:39.248931] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.887 [2024-07-13 05:18:39.248949] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.887 [2024-07-13 05:18:39.248967] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.887 [2024-07-13 05:18:39.248984] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.887 [2024-07-13 05:18:39.249003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.887 [2024-07-13 05:18:39.249021] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.887 [2024-07-13 05:18:39.249039] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.887 [2024-07-13 05:18:39.249057] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.887 [2024-07-13 05:18:39.249075] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.887 [2024-07-13 05:18:39.249093] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.887 [2024-07-13 05:18:39.249111] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.887 [2024-07-13 05:18:39.249129] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:29:32.887 [2024-07-13 05:18:39.249994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.250055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.250098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.250124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.250162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.250186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.250212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL 
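[editor's note] The tcp.c:1607 records above all come from the same guard in SPDK's NVMe-oF TCP target: when the per-qpair PDU receive state machine is asked to transition into the state it already holds, it logs the request and ignores it rather than re-entering the state, which is why the message repeats once per redundant request while the qpairs are being torn down. A minimal sketch of that guard, assuming surrounding types as in upstream SPDK (the per-state handling after the check is elided):

static void
nvmf_tcp_qpair_set_recv_state(struct spdk_nvmf_tcp_qpair *tqpair,
			      enum nvme_tcp_pdu_recv_state state)
{
	if (tqpair->recv_state == state) {
		/* Emits the line seen above; "state(5)" is the integer value of
		 * the requested enum member (here likely the error/quiescing
		 * state used during disconnect). */
		SPDK_ERRLOG("The recv state of tqpair=%p is same with the state(%d) to be set\n",
			    tqpair, state);
		return;
	}
	tqpair->recv_state = state;	/* otherwise accept the transition */
	/* ... per-state handling elided ... */
}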
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.250241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.250266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.250288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.250312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.250334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.250357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.250379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.250403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.250441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.250465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.250487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.250510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.250531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.250554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.250576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.250599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.250620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.250643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.250665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.250689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.250710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.250751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.250773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.250796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.250819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.250847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.250883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.250910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.250932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.250957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.250995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.251022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.251044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.251069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.251091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.251115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.251137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.251170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.251191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.251216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.251238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.251262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.251284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.251308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.251330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.251354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.251376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.251401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.251437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.251462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.251488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.887 [2024-07-13 05:18:39.251499] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same [2024-07-13 05:18:39.251512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:1with the state(5) to be set 00:29:32.887 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.887 [2024-07-13 05:18:39.251539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.888 [2024-07-13 05:18:39.251541] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.251562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:1[2024-07-13 05:18:39.251563] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.888 with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.251584] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same [2024-07-13 05:18:39.251585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(5) to be set 00:29:32.888 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.888 [2024-07-13 05:18:39.251604] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.251611] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.888 [2024-07-13 05:18:39.251622] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.251633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.888 [2024-07-13 05:18:39.251640] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.251657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:1[2024-07-13 05:18:39.251659] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.888 with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.251678] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same [2024-07-13 05:18:39.251680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(5) to be set 00:29:32.888 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.888 [2024-07-13 05:18:39.251698] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.251704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.888 [2024-07-13 05:18:39.251717] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.251726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.888 [2024-07-13 05:18:39.251736] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.251749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.888 [2024-07-13 05:18:39.251754] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.251771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-13 05:18:39.251772] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.888 with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.251794] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.251799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.888 [2024-07-13 05:18:39.251812] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.251822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.888 [2024-07-13 05:18:39.251830] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.251856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.888 [2024-07-13 05:18:39.251858] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.251903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-13 05:18:39.251904] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.888 with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.251926] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.251932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.888 [2024-07-13 05:18:39.251944] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.251955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.888 [2024-07-13 05:18:39.251963] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.251979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:1[2024-07-13 05:18:39.251981] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.888 with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.252002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same [2024-07-13 05:18:39.252003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(5) to be set 00:29:32.888 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.888 [2024-07-13 05:18:39.252022] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.252030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.888 [2024-07-13 05:18:39.252041] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.252052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.888 [2024-07-13 05:18:39.252060] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.252077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:1[2024-07-13 05:18:39.252079] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.888 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.888 [2024-07-13 05:18:39.252105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.888 [2024-07-13 05:18:39.252105] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.252128] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.252132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.888 [2024-07-13 05:18:39.252157] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.252165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.888 [2024-07-13 05:18:39.252191] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.252206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.888 [2024-07-13 05:18:39.252209] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.252228] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.252229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.888 [2024-07-13 05:18:39.252264] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.252271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.888 [2024-07-13 05:18:39.252283] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.888 [2024-07-13 05:18:39.252294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.889 [2024-07-13 05:18:39.252301] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.252319] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.252318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.889 [2024-07-13 05:18:39.252340] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.252343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.889 [2024-07-13 05:18:39.252358] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.252368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.889 [2024-07-13 05:18:39.252376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.252390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.889 [2024-07-13 05:18:39.252395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.252414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.889 [2024-07-13 05:18:39.252417] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.252437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.889 [2024-07-13 05:18:39.252438] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.252458] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.252463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.889 [2024-07-13 05:18:39.252476] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.252486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.889 [2024-07-13 05:18:39.252495] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.252511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.889 [2024-07-13 05:18:39.252513] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.252533] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.252534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.889 [2024-07-13 05:18:39.252553] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.252575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.889 [2024-07-13 05:18:39.252588] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.252597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.889 [2024-07-13 05:18:39.252606] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.252621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.889 [2024-07-13 05:18:39.252643] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.252645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.889 [2024-07-13 05:18:39.252664] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.252671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.889 [2024-07-13 05:18:39.252682] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.252692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.889 [2024-07-13 05:18:39.252701] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.252716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.889 [2024-07-13 05:18:39.252723] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.252738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.889 [2024-07-13 05:18:39.252741] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.252760] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.252761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.889 [2024-07-13 05:18:39.252778] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.252783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.889 [2024-07-13 05:18:39.252796] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.252806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
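The repeated tcp.c:1607 *ERROR* lines come from a state-transition guard in the NVMe-oF TCP target: while the connection is being torn down, the poller keeps requesting the receive state the qpair is already in, and the setter logs the no-op instead of transitioning. A minimal standalone C sketch of that guard pattern (illustrative struct and function names, not the exact SPDK source):

#include <stdio.h>

/* Illustrative stand-in for the target's TCP qpair; not the real SPDK struct. */
struct tqpair {
    int recv_state;
};

/* Sketch of the guard that emits the tcp.c:1607 message: a request to enter
 * the state the qpair is already in is logged and ignored. */
static void set_recv_state(struct tqpair *tqpair, int state)
{
    if (tqpair->recv_state == state) {
        fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                (void *)tqpair, state);
        return;
    }
    tqpair->recv_state = state;
}

int main(void)
{
    struct tqpair q = { .recv_state = 5 };
    set_recv_state(&q, 5); /* no-op transition: logs once, as in the output above */
    return 0;
}

The alternating WRITE/READ prints and ABORTED - SQ DELETION completions are the host side flushing its still-queued I/O: each outstanding command is printed together with the synthetic completion it receives once the submission queue is deleted.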
00:29:32.889 [2024-07-13 05:18:39.252815] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.252828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.889 [2024-07-13 05:18:39.252832] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.252862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.889 [2024-07-13 05:18:39.252907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.889 [2024-07-13 05:18:39.252934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.889 [2024-07-13 05:18:39.252956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.889 [2024-07-13 05:18:39.252980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.889 [2024-07-13 05:18:39.253002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.889 [2024-07-13 05:18:39.253026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.889 [2024-07-13 05:18:39.253048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.889 [2024-07-13 05:18:39.253072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.889 [2024-07-13 05:18:39.253093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.889 [2024-07-13 05:18:39.253117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.889 [2024-07-13 05:18:39.253139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.889 [2024-07-13 05:18:39.253193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.889 [2024-07-13 05:18:39.253215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.889 [2024-07-13 05:18:39.253238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.889 [2024-07-13 05:18:39.253260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.889 [2024-07-13 05:18:39.253282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.889 [2024-07-13 
05:18:39.253303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.889 [2024-07-13 05:18:39.253375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.889 [2024-07-13 05:18:39.253663] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f8e00 was disconnected and freed. reset controller. 00:29:32.889 [2024-07-13 05:18:39.254265] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.254305] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.254328] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.254530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.254555] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.254575] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.254593] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.254610] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.254628] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.254613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.889 [2024-07-13 05:18:39.254647] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.254657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.889 [2024-07-13 05:18:39.254665] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.254682] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.889 [2024-07-13 05:18:39.254691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.890 [2024-07-13 05:18:39.254700] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.254718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.890 [2024-07-13 05:18:39.254718] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.254744] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.254750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.890 [2024-07-13 05:18:39.254762] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.254774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.890 [2024-07-13 05:18:39.254780] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.254798] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.254798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.890 [2024-07-13 05:18:39.254821] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.254822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.890 [2024-07-13 05:18:39.254839] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.254847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.890 [2024-07-13 05:18:39.254857] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.254876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.890 [2024-07-13 05:18:39.254883] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.254903] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.254902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.890 [2024-07-13 05:18:39.254924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.254927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.890 [2024-07-13 05:18:39.254941] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.254950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.890 [2024-07-13 05:18:39.254959] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.254972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.890 [2024-07-13 05:18:39.254980] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.254996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.890 [2024-07-13 05:18:39.254999] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.890 [2024-07-13 05:18:39.255020] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255044] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.890 [2024-07-13 05:18:39.255062] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.890 [2024-07-13 05:18:39.255081] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.890 [2024-07-13 05:18:39.255100] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255119] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.890 [2024-07-13 05:18:39.255139] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.890 [2024-07-13 05:18:39.255157] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.890 [2024-07-13 05:18:39.255192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.890 [2024-07-13 05:18:39.255229] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.890 [2024-07-13 05:18:39.255247] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.890 [2024-07-13 05:18:39.255265] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.890 [2024-07-13 05:18:39.255283] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255301] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.890 [2024-07-13 05:18:39.255319] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.890 [2024-07-13 05:18:39.255337] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255356] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.890 [2024-07-13 05:18:39.255373] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.890 [2024-07-13 05:18:39.255392] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.890 [2024-07-13 05:18:39.255409] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255428] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.890 [2024-07-13 05:18:39.255447] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.890 [2024-07-13 05:18:39.255465] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.890 [2024-07-13 05:18:39.255483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.890 [2024-07-13 05:18:39.255519] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.890 [2024-07-13 05:18:39.255536] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255554] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.890 [2024-07-13 05:18:39.255558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.255572] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.891 [2024-07-13 05:18:39.255580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.255596] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.891 [2024-07-13 05:18:39.255604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.255615] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.891 [2024-07-13 05:18:39.255625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.255633] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.891 [2024-07-13 05:18:39.255650] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.891 [2024-07-13 05:18:39.255649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.255670] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:29:32.891 [2024-07-13 05:18:39.255673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.255697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.255717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.255740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.255761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.255784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.255806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.255829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.255874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.255903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.255926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.255950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.255971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.255994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.256016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.256040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.256066] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.256092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.256113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.256137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.256159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.256183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.256204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.256227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.256249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.256272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.256295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.256320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.256342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.256366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.256387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.256412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.256434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.256458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.256494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.256519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.256540] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.256564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.256586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.256609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.256629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.256653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.256678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.256702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.256723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.256746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.256767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.256790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.256811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.256835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.256856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.256904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.256928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.256952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.256974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.256998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.257020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.257045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.257067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.257091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.257113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.257145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.257168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.257208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.257230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.257253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.257274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.257304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.257326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.257349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.257370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.257393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.257414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.257437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.891 [2024-07-13 05:18:39.257458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.891 [2024-07-13 05:18:39.257481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.892 [2024-07-13 05:18:39.257502] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.257526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.892 [2024-07-13 05:18:39.257547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.257570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.892 [2024-07-13 05:18:39.257591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.257614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.892 [2024-07-13 05:18:39.257635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.257658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.892 [2024-07-13 05:18:39.257679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.257701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.892 [2024-07-13 05:18:39.257722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.261375] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f9d00 was disconnected and freed. reset controller. 
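The "CQ transport error -6 (No such device or address)" above is spdk_nvme_qpair_process_completions() returning -ENXIO once the TCP connection to the target is gone; bdev_nvme then frees the disconnected qpair and resets the controller, which is the "reset controller" notice. A minimal host-side sketch of that recovery path (the two SPDK calls are real; the surrounding handling is simplified and illustrative):

#include <stdio.h>
#include "spdk/nvme.h"

/* Poll one I/O qpair; on a transport error (e.g. -ENXIO == -6 after the
 * target dropped the TCP connection) fall back to a full controller reset,
 * mirroring the "reset controller" step logged above. */
static void poll_and_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
    int rc = spdk_nvme_qpair_process_completions(qpair, 0 /* 0 = no completion limit */);
    if (rc < 0) {
        fprintf(stderr, "CQ transport error %d on qpair\n", rc);
        if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
            fprintf(stderr, "controller reset failed\n");
        }
    }
}

The admin-queue block that follows fits the same teardown: each controller's queued ASYNC EVENT REQUEST commands complete as ABORTED - SQ DELETION before the reset re-establishes the admin queue.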
00:29:32.892 [2024-07-13 05:18:39.261576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.892 [2024-07-13 05:18:39.261610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.261637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.892 [2024-07-13 05:18:39.261658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.261685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.892 [2024-07-13 05:18:39.261707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.261729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.892 [2024-07-13 05:18:39.261749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.261770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6880 is same with the state(5) to be set 00:29:32.892 [2024-07-13 05:18:39.261833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.892 [2024-07-13 05:18:39.261860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.261890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.892 [2024-07-13 05:18:39.261912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.261934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.892 [2024-07-13 05:18:39.261955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.261976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.892 [2024-07-13 05:18:39.261997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.262017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5200 is same with the state(5) to be set 00:29:32.892 [2024-07-13 05:18:39.262087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.892 [2024-07-13 05:18:39.262116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.262140] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.892 [2024-07-13 05:18:39.262161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.262183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.892 [2024-07-13 05:18:39.262205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.262227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.892 [2024-07-13 05:18:39.262248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.262269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5980 is same with the state(5) to be set 00:29:32.892 [2024-07-13 05:18:39.262340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.892 [2024-07-13 05:18:39.262371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.262400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.892 [2024-07-13 05:18:39.262423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.262446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.892 [2024-07-13 05:18:39.262467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.262489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.892 [2024-07-13 05:18:39.262523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.262545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:29:32.892 [2024-07-13 05:18:39.262611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.892 [2024-07-13 05:18:39.262641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.262666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.892 [2024-07-13 05:18:39.262688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.262710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 
nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.892 [2024-07-13 05:18:39.262731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.262753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.892 [2024-07-13 05:18:39.262775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.262795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(5) to be set 00:29:32.892 [2024-07-13 05:18:39.262864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.892 [2024-07-13 05:18:39.262899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.262923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.892 [2024-07-13 05:18:39.262945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.262967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.892 [2024-07-13 05:18:39.262988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.263010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.892 [2024-07-13 05:18:39.263031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.263051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4a80 is same with the state(5) to be set 00:29:32.892 [2024-07-13 05:18:39.263118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.892 [2024-07-13 05:18:39.263152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.263177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.892 [2024-07-13 05:18:39.263199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.892 [2024-07-13 05:18:39.263221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.892 [2024-07-13 05:18:39.263242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.893 [2024-07-13 05:18:39.263264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.893 [2024-07-13 05:18:39.263286] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.893 [2024-07-13 05:18:39.263305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3b80 is same with the state(5) to be set
00:29:32.893 [2024-07-13 05:18:39.263364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:32.893 [2024-07-13 05:18:39.263402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST (0c) / ABORTED - SQ DELETION (00/08) NOTICE pair repeats for qid:0 cid:1-3 and for two further cid:0-3 groups; each group is followed by nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=... is same with the state(5) to be set, for tqpair=0x6150001f3400, 0x6150001f2c80 and 0x6150001f6100 ...]
00:29:32.893 [2024-07-13 05:18:39.265904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.893 [2024-07-13 05:18:39.265944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion NOTICE pair repeats for WRITE sqid:1 cid:53-63 (lba:23168-24448) and READ sqid:1 cid:0-51 (lba:16384-22912), all len:128, every completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 ...]
00:29:32.894 [2024-07-13 05:18:39.269313] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f9080 was disconnected and freed. reset controller.
00:29:32.894 [2024-07-13 05:18:39.271024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:29:32.894 [2024-07-13 05:18:39.271115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3b80 (9): Bad file descriptor
00:29:32.894 [2024-07-13 05:18:39.273048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:29:32.894 [2024-07-13 05:18:39.273101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
[... nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=... (9): Bad file descriptor, repeated for tqpair=0x6150001f4300, 0x6150001f6880, 0x6150001f5200, 0x6150001f5980, 0x6150001f2500, 0x6150001f4a80, 0x6150001f3400, 0x6150001f2c80 and 0x6150001f6100 ...]
00:29:32.895 [2024-07-13 05:18:39.276356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.895 [2024-07-13 05:18:39.276406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3b80 with addr=10.0.0.2, port=4420
00:29:32.895 [2024-07-13 05:18:39.276435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3b80 is same with the state(5) to be set
00:29:32.895 [2024-07-13 05:18:39.276762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.895 [2024-07-13 05:18:39.276808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion NOTICE pair repeats for READ sqid:1 cid:0-3 (lba:24576-24960) and WRITE sqid:1 cid:5-63 (lba:25216-32640), all len:128, every completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 ...]
00:29:32.896 [2024-07-13 05:18:39.279935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8b80 is same with the state(5) to be set
00:29:32.896 [2024-07-13 05:18:39.280244] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f8b80 was disconnected and freed. reset controller.
00:29:32.896 [2024-07-13 05:18:39.280852] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:32.896 [2024-07-13 05:18:39.280964] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:32.896 [2024-07-13 05:18:39.281172] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:32.896 [2024-07-13 05:18:39.281252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.896 [2024-07-13 05:18:39.281283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.896 [2024-07-13 05:18:39.281324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.896 [2024-07-13 05:18:39.281351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.896 [2024-07-13 05:18:39.281375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9a80 is same with the state(5) to be set
00:29:32.896 [2024-07-13 05:18:39.281672] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f9a80 was disconnected and freed. reset controller.
00:29:32.896 [2024-07-13 05:18:39.281935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.896 [2024-07-13 05:18:39.281973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6880 with addr=10.0.0.2, port=4420
00:29:32.896 [2024-07-13 05:18:39.281997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6880 is same with the state(5) to be set
00:29:32.896 [2024-07-13 05:18:39.282130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.896 [2024-07-13 05:18:39.282165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420
00:29:32.896 [2024-07-13 05:18:39.282188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(5) to be set
00:29:32.896 [2024-07-13 05:18:39.282216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3b80 (9): Bad file descriptor
00:29:32.896 [2024-07-13 05:18:39.282301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.896 [2024-07-13 05:18:39.282331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion NOTICE pair repeats for READ sqid:1 cid:1-46 (lba:16512-22272), all len:128, every completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 ...]
00:29:32.897 [2024-07-13 05:18:39.284600] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.898 [2024-07-13 05:18:39.284621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.898 [2024-07-13 05:18:39.284645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.898 [2024-07-13 05:18:39.284666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.898 [2024-07-13 05:18:39.284690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.898 [2024-07-13 05:18:39.284711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.898 [2024-07-13 05:18:39.284734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.898 [2024-07-13 05:18:39.284755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.898 [2024-07-13 05:18:39.284779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.898 [2024-07-13 05:18:39.284800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.898 [2024-07-13 05:18:39.284828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.898 [2024-07-13 05:18:39.284872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.898 [2024-07-13 05:18:39.284901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.898 [2024-07-13 05:18:39.284923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.898 [2024-07-13 05:18:39.284948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.898 [2024-07-13 05:18:39.284970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.898 [2024-07-13 05:18:39.284994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.898 [2024-07-13 05:18:39.285016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.898 [2024-07-13 05:18:39.285040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.898 [2024-07-13 05:18:39.285063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.898 [2024-07-13 05:18:39.285087] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.898 [2024-07-13 05:18:39.285109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.898 [2024-07-13 05:18:39.285134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.898 [2024-07-13 05:18:39.285156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.898 [2024-07-13 05:18:39.285198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.898 [2024-07-13 05:18:39.285219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.898 [2024-07-13 05:18:39.285244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.898 [2024-07-13 05:18:39.285265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.898 [2024-07-13 05:18:39.285288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.898 [2024-07-13 05:18:39.285310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.898 [2024-07-13 05:18:39.285334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.898 [2024-07-13 05:18:39.285354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.898 [2024-07-13 05:18:39.285378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.898 [2024-07-13 05:18:39.285398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.898 [2024-07-13 05:18:39.285420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8680 is same with the state(5) to be set 00:29:32.898 [2024-07-13 05:18:39.285698] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f8680 was disconnected and freed. reset controller. 
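The status pair printed as (00/08) on every completion above is the NVMe (SCT/SC) tuple: status code type 0x0 (generic command status) with status code 0x08 (Command Aborted due to SQ Deletion), which is exactly the "ABORTED - SQ DELETION" text spdk_nvme_print_completion renders when the submission queue is torn down with reads still outstanding. A minimal sketch for folding a run like the one above into a single summary line (Python; the helper name and stdin usage are illustrative, not part of SPDK or this test):

    #!/usr/bin/env python3
    # Hypothetical helper: collapse the repeated READ/"ABORTED - SQ DELETION"
    # pairs from an SPDK nvme_qpair log dump into one summary line.
    import re
    import sys

    # Matches the command print exactly as it appears in the log above.
    CMD = re.compile(r"READ sqid:(\d+) cid:(\d+) nsid:\d+ lba:(\d+) len:(\d+)")

    def summarize(text: str) -> str:
        cids = [int(m.group(2)) for m in CMD.finditer(text)]
        lbas = [int(m.group(3)) for m in CMD.finditer(text)]
        if not cids:
            return "no READ commands found"
        return (f"{len(cids)} aborted READs, cid:{min(cids)}-{max(cids)}, "
                f"lba:{min(lbas)}-{max(lbas)}")

    if __name__ == "__main__":
        print(summarize(sys.stdin.read()))

Fed the dump above on stdin, this would print one line like "57 aborted READs, cid:7-63, lba:17280-24448".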
00:29:32.898 [2024-07-13 05:18:39.285804] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:32.898 [2024-07-13 05:18:39.287125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.898 [2024-07-13 05:18:39.287169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ/ABORTED pair repeats for cid:6 through cid:56 (lba:17152-23552), then WRITE cid:0-3 (lba:24576-24960) and READ cid:57-63 (lba:23680-24448), 05:18:39.287202-05:18:39.290166 ...]
00:29:32.899 [2024-07-13 05:18:39.290187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9800 is same with the state(5) to be set
00:29:32.899 [2024-07-13 05:18:39.290493] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f9800 was disconnected and freed. reset controller.
00:29:32.899 [2024-07-13 05:18:39.291423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:29:32.899 [2024-07-13 05:18:39.291468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:29:32.899 [2024-07-13 05:18:39.291536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6880 (9): Bad file descriptor
00:29:32.899 [2024-07-13 05:18:39.291570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor
00:29:32.899 [2024-07-13 05:18:39.291595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:29:32.899 [2024-07-13 05:18:39.291616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:29:32.899 [2024-07-13 05:18:39.291639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:29:32.900 [2024-07-13 05:18:39.291684] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
[... the failover notice repeats 4 more times, 05:18:39.291721-05:18:39.291845 ...]
00:29:32.900 [2024-07-13 05:18:39.294387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
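The chain above appears to be: the dropped TCP qpairs fire bdev_nvme's disconnected-qpair callback, each callback asks for a controller reset, the overlapping requests are refused ("Unable to perform failover, already in progress"), and the in-flight reset then finishes with "Resetting controller failed." One rough way to follow the interleaving is to key the lifecycle messages on their [nqn...] tag; a minimal sketch (Python, reading log text on stdin; names are illustrative only):

    #!/usr/bin/env python3
    # Hypothetical helper: group controller lifecycle messages by NQN so the
    # reset/failover interleaving in the log is easier to follow.
    import re
    import sys
    from collections import defaultdict

    EVENT = re.compile(r"\[(nqn\.[^\]]+)\] (resetting controller"
                       r"|controller reinitialization failed"
                       r"|Ctrlr is in error state"
                       r"|in failed state\.)")

    events = defaultdict(list)
    for m in EVENT.finditer(sys.stdin.read()):
        events[m.group(1)].append(m.group(2))

    for nqn, seq in sorted(events.items()):
        print(f"{nqn}: {' -> '.join(seq)}")

For cnode4 above this would print "nqn.2016-06.io.spdk:cnode4: Ctrlr is in error state -> controller reinitialization failed -> in failed state."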
00:29:32.900 [2024-07-13 05:18:39.294444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.900 [2024-07-13 05:18:39.294473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:29:32.900 [2024-07-13 05:18:39.294675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.900 [2024-07-13 05:18:39.294713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3400 with addr=10.0.0.2, port=4420
00:29:32.900 [2024-07-13 05:18:39.294738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3400 is same with the state(5) to be set
00:29:32.900 [2024-07-13 05:18:39.294897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.900 [2024-07-13 05:18:39.294933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6100 with addr=10.0.0.2, port=4420
00:29:32.900 [2024-07-13 05:18:39.294956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(5) to be set
00:29:32.900 [2024-07-13 05:18:39.294977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:29:32.900 [2024-07-13 05:18:39.294995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:29:32.900 [2024-07-13 05:18:39.295014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:29:32.900 [2024-07-13 05:18:39.295049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:29:32.900 [2024-07-13 05:18:39.295071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:29:32.900 [2024-07-13 05:18:39.295090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
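The raw errno values in this block are ordinary Linux codes: 111 on the connect() attempts is ECONNREFUSED (presumably nothing is accepting connections at 10.0.0.2:4420 during the reset window), and the (9) on the earlier flush failures is EBADF, a socket descriptor that has already been closed. A quick check (Python; values are Linux-specific):

    #!/usr/bin/env python3
    # Decode the raw errno values that appear in the connect/flush errors
    # above. On Linux, 9 is EBADF and 111 is ECONNREFUSED.
    import errno
    import os

    for num in (9, 111):
        print(f"errno {num}: {errno.errorcode[num]} - {os.strerror(num)}")
    # errno 9: EBADF - Bad file descriptor
    # errno 111: ECONNREFUSED - Connection refused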
00:29:32.900 [2024-07-13 05:18:39.295189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.900 [2024-07-13 05:18:39.295220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ/ABORTED pair repeats for cid:1 through cid:63, lba:16512 through lba:24448 in 128-block steps, 05:18:39.295260-05:18:39.298232 ...]
00:29:32.901 [2024-07-13 05:18:39.298252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8900 is same with the state(5) to be set
00:29:32.901 [2024-07-13 05:18:39.300252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.901 [2024-07-13 05:18:39.300287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ/ABORTED pair repeats for cid:1 through cid:5, lba:16512 through lba:17024; the capture cuts off mid-entry ...]
00:29:32.902 [2024-07-13 05:18:39.300568] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.300591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.300612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.300635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.300657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.300679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.300700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.300723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.300748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.300773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.300794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.300817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.300838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.300887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.300911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.300935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.300957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.300981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.301003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.301026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.301048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.301071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.301092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.301116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.301138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.301161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.301198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.301222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.301243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.301267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.301287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.301310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.301331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.301359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.301381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.301404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.301424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.301447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.301468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.301491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.301512] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.301534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.301555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.301578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.301599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.301623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.301643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.301666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.301687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.301710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.301731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.301754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.301775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.301798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.301819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.301842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.301862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.301914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.301941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.301965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.301987] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.302011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.302032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.302055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.302076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.302100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.302121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.302145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.302166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.302205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.302226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.302249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.302270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.302292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.302313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.302336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.302357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.302380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.302401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.302424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.902 [2024-07-13 05:18:39.302445] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.902 [2024-07-13 05:18:39.302468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.302489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.302516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.302537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.302560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.302581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.302603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.302624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.302646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.302667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.302690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.302711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.302734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.302754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.302777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.302798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.302821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.302842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.302864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.302910] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.302935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.302957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.302981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.303002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.303026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.303047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.303071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.303096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.303121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.303143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.303167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.303203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.303228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.303249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.303271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.303291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.303312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9300 is same with the state(5) to be set 00:29:32.903 [2024-07-13 05:18:39.304826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.304881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.304916] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.304940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.304965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.304999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.305026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.305049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.305073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.305095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.305119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.305141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.305165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.305187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.305210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.305232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.305261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.305284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.305308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.305330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.305354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.305376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.305400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.305422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.305445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.305467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.305491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.305513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.305536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.305558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.305582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.305603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.305627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.305648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.305671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.305693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.305717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.305739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.305763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.305784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.305808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.305834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.305859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.305888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.305913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.305936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.305959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.903 [2024-07-13 05:18:39.305981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.903 [2024-07-13 05:18:39.306004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.306026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.306049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.306071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.306095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.306117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.306141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.306177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.306200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.306221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.306244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.306264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.306287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.306309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.306331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.306352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.306375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.306396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.306424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.306446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.306469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.306490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.306513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.306534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.306557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.306578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.306601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.306622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.306645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.306666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.306688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.306709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.306732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.306753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.306776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:32.904 [2024-07-13 05:18:39.306797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.306820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.306841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.306888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.306912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.306936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.306958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.306981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.307007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.307032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.307054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.307078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.307099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.307123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.307160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.307185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.307207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.307230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.307251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.307274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 
05:18:39.307295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.307318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.307339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.307362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.307384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.307407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.307428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.307450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.307471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.307493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.307515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.307538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.307559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.307588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.307610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.307633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.307654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.307677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.307698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.307721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.307742] 
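Every record in the dumps above is the same pair: the command print for an outstanding READ, then its completion with ABORTED - SQ DELETION. The "(00/08)" that SPDK prints is NVMe status code type 0x0 (Generic Command Status) and status code 0x08 (Command Aborted due to SQ Deletion): deleting a submission queue during the controller reset aborts every command still queued on that qpair. A minimal sketch of decoding that status word, assuming only the standard NVMe completion-entry bit layout rather than SPDK's own print helpers:

#include <stdint.h>
#include <stdio.h>

/* Decode the 16-bit status word of an NVMe completion-queue entry.
 * Layout per the NVMe base spec: bit 0 phase tag, bits 1-8 status code (SC),
 * bits 9-11 status code type (SCT), bits 12-13 command retry delay,
 * bit 14 more (m), bit 15 do not retry (dnr). */
static void decode_status(uint16_t status)
{
    unsigned sc  = (status >> 1) & 0xff;
    unsigned sct = (status >> 9) & 0x7;
    unsigned m   = (status >> 14) & 0x1;
    unsigned dnr = (status >> 15) & 0x1;

    if (sct == 0x0 && sc == 0x08)
        printf("ABORTED - SQ DELETION (%02x/%02x) m:%u dnr:%u\n", sct, sc, m, dnr);
    else
        printf("sct:%02x sc:%02x m:%u dnr:%u\n", sct, sc, m, dnr);
}

int main(void)
{
    /* SCT 0x0 / SC 0x08 with m and dnr clear, as printed throughout the dump */
    decode_status(0x08 << 1);
    return 0;
}

With m:0 and dnr:0 in every completion, nothing forbids the host from retrying the aborted READs once the qpair reconnects, which is what the resets below attempt.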
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.307766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.307787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.307810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.904 [2024-07-13 05:18:39.307831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.904 [2024-07-13 05:18:39.307874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9580 is same with the state(5) to be set 00:29:32.904 [2024-07-13 05:18:39.312896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.904 [2024-07-13 05:18:39.312931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.904 [2024-07-13 05:18:39.312955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:29:32.904 [2024-07-13 05:18:39.312983] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:29:32.904 task offset: 24704 on job bdev=Nvme4n1 fails 00:29:32.904 00:29:32.904 Latency(us) 00:29:32.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.904 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.905 Job: Nvme1n1 ended in about 1.01 seconds with error 00:29:32.905 Verification LBA range: start 0x0 length 0x400 00:29:32.905 Nvme1n1 : 1.01 126.92 7.93 63.46 0.00 332481.42 24855.13 306028.85 00:29:32.905 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.905 Job: Nvme2n1 ended in about 1.01 seconds with error 00:29:32.905 Verification LBA range: start 0x0 length 0x400 00:29:32.905 Nvme2n1 : 1.01 126.13 7.88 63.07 0.00 327976.83 26020.22 315349.52 00:29:32.905 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.905 Job: Nvme3n1 ended in about 1.00 seconds with error 00:29:32.905 Verification LBA range: start 0x0 length 0x400 00:29:32.905 Nvme3n1 : 1.00 191.54 11.97 63.85 0.00 237872.17 15146.10 302921.96 00:29:32.905 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.905 Job: Nvme4n1 ended in about 0.98 seconds with error 00:29:32.905 Verification LBA range: start 0x0 length 0x400 00:29:32.905 Nvme4n1 : 0.98 195.67 12.23 65.22 0.00 227607.13 14951.92 299815.06 00:29:32.905 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.905 Job: Nvme5n1 ended in about 0.99 seconds with error 00:29:32.905 Verification LBA range: start 0x0 length 0x400 00:29:32.905 Nvme5n1 : 0.99 129.54 8.10 64.77 0.00 299235.56 17573.36 318456.41 00:29:32.905 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.905 Job: Nvme6n1 ended in about 1.02 seconds with error 00:29:32.905 Verification LBA range: start 0x0 length 0x400 00:29:32.905 Nvme6n1 : 1.02 125.51 7.84 62.75 0.00 303232.44 26991.12 316902.97 00:29:32.905 Job: Nvme7n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:29:32.905 Job: Nvme7n1 ended in about 1.02 seconds with error 00:29:32.905 Verification LBA range: start 0x0 length 0x400 00:29:32.905 Nvme7n1 : 1.02 124.95 7.81 62.47 0.00 298052.77 20874.43 310689.19 00:29:32.905 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.905 Job: Nvme8n1 ended in about 1.01 seconds with error 00:29:32.905 Verification LBA range: start 0x0 length 0x400 00:29:32.905 Nvme8n1 : 1.01 131.72 8.23 62.39 0.00 280746.71 20777.34 309135.74 00:29:32.905 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.905 Job: Nvme9n1 ended in about 1.01 seconds with error 00:29:32.905 Verification LBA range: start 0x0 length 0x400 00:29:32.905 Nvme9n1 : 1.01 125.15 7.82 1.99 0.00 411491.75 27185.30 379040.81 00:29:32.905 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.905 Job: Nvme10n1 ended in about 0.99 seconds with error 00:29:32.905 Verification LBA range: start 0x0 length 0x400 00:29:32.905 Nvme10n1 : 0.99 129.76 8.11 64.88 0.00 265862.76 26214.40 335544.32 00:29:32.905 =================================================================================================================== 00:29:32.905 Total : 1406.89 87.93 574.85 0.00 290550.11 14951.92 379040.81 00:29:33.164 [2024-07-13 05:18:39.395103] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:33.164 [2024-07-13 05:18:39.395220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:29:33.164 [2024-07-13 05:18:39.395589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.164 [2024-07-13 05:18:39.395644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:29:33.164 [2024-07-13 05:18:39.395683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:29:33.164 [2024-07-13 05:18:39.395854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.164 [2024-07-13 05:18:39.395898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5980 with addr=10.0.0.2, port=4420 00:29:33.164 [2024-07-13 05:18:39.395921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5980 is same with the state(5) to be set 00:29:33.164 [2024-07-13 05:18:39.395958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3400 (9): Bad file descriptor 00:29:33.164 [2024-07-13 05:18:39.395994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:29:33.164 [2024-07-13 05:18:39.396095] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:33.164 [2024-07-13 05:18:39.396132] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
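The bdevperf-style table above is internally consistent: every job runs 65536-byte IOs, so MiB/s should equal IOPS * 65536 / 2^20, i.e. IOPS / 16, while Fail/s counts the aborted completions per second. A quick standalone cross-check of a few rows (the device names and IOPS figures are copied from the table; nothing here is part of the test itself):

#include <stdio.h>

/* Verify MiB/s = IOPS / 16 for 64 KiB IOs against the table above. */
int main(void)
{
    const char  *name[] = { "Nvme1n1", "Nvme3n1", "Total" };
    const double iops[] = { 126.92, 191.54, 1406.89 };

    for (int i = 0; i < 3; i++)
        printf("%-8s %8.2f IOPS -> %6.2f MiB/s\n",
               name[i], iops[i], iops[i] * 65536.0 / (1024.0 * 1024.0));
    /* prints 7.93, 11.97 and 87.93 MiB/s, matching the table columns */
    return 0;
}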
00:29:33.164 [2024-07-13 05:18:39.396177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5980 (9): Bad file descriptor
00:29:33.164 [2024-07-13 05:18:39.396215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:29:33.164 [2024-07-13 05:18:39.397699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.164 [2024-07-13 05:18:39.397747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2c80 with addr=10.0.0.2, port=4420
00:29:33.164 [2024-07-13 05:18:39.397779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set
00:29:33.164 [2024-07-13 05:18:39.397930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.164 [2024-07-13 05:18:39.397968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4a80 with addr=10.0.0.2, port=4420
00:29:33.164 [2024-07-13 05:18:39.397991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4a80 is same with the state(5) to be set
00:29:33.164 [2024-07-13 05:18:39.398155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.164 [2024-07-13 05:18:39.398189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5200 with addr=10.0.0.2, port=4420
00:29:33.164 [2024-07-13 05:18:39.398212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5200 is same with the state(5) to be set
00:29:33.164 [2024-07-13 05:18:39.398238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:29:33.164 [2024-07-13 05:18:39.398260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:29:33.164 [2024-07-13 05:18:39.398283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:29:33.164 [2024-07-13 05:18:39.398315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:29:33.164 [2024-07-13 05:18:39.398336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:29:33.164 [2024-07-13 05:18:39.398354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:29:33.164 [2024-07-13 05:18:39.398403] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:33.164 [2024-07-13 05:18:39.398449] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:33.164 [2024-07-13 05:18:39.398475] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:33.164 [2024-07-13 05:18:39.398499] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:33.164 [2024-07-13 05:18:39.398525] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:33.164 [2024-07-13 05:18:39.398549] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:33.164 [2024-07-13 05:18:39.398574] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:33.164 [2024-07-13 05:18:39.400133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:29:33.164 [2024-07-13 05:18:39.400178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:29:33.164 [2024-07-13 05:18:39.400219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:29:33.164 [2024-07-13 05:18:39.400277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.164 [2024-07-13 05:18:39.400303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.164 [2024-07-13 05:18:39.400364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2c80 (9): Bad file descriptor
00:29:33.164 [2024-07-13 05:18:39.400399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4a80 (9): Bad file descriptor
00:29:33.164 [2024-07-13 05:18:39.400426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5200 (9): Bad file descriptor
00:29:33.164 [2024-07-13 05:18:39.400448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.164 [2024-07-13 05:18:39.400471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.164 [2024-07-13 05:18:39.400490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.164 [2024-07-13 05:18:39.400519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:29:33.164 [2024-07-13 05:18:39.400539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:29:33.164 [2024-07-13 05:18:39.400557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:29:33.164 [2024-07-13 05:18:39.400714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.164 [2024-07-13 05:18:39.400742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.164 [2024-07-13 05:18:39.400940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.164 [2024-07-13 05:18:39.400976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3b80 with addr=10.0.0.2, port=4420
00:29:33.164 [2024-07-13 05:18:39.400999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3b80 is same with the state(5) to be set
00:29:33.164 [2024-07-13 05:18:39.401133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.164 [2024-07-13 05:18:39.401167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420
00:29:33.164 [2024-07-13 05:18:39.401190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(5) to be set
00:29:33.164 [2024-07-13 05:18:39.401345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.164 [2024-07-13 05:18:39.401379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6880 with addr=10.0.0.2, port=4420
00:29:33.164 [2024-07-13 05:18:39.401402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6880 is same with the state(5) to be set
00:29:33.164 [2024-07-13 05:18:39.401423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:29:33.164 [2024-07-13 05:18:39.401441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:29:33.165 [2024-07-13 05:18:39.401459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:29:33.165 [2024-07-13 05:18:39.401486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:29:33.165 [2024-07-13 05:18:39.401508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:29:33.165 [2024-07-13 05:18:39.401527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:29:33.165 [2024-07-13 05:18:39.401552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:29:33.165 [2024-07-13 05:18:39.401587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:29:33.165 [2024-07-13 05:18:39.401605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:29:33.165 [2024-07-13 05:18:39.401697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.165 [2024-07-13 05:18:39.401724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.165 [2024-07-13 05:18:39.401742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.165 [2024-07-13 05:18:39.401765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3b80 (9): Bad file descriptor 00:29:33.165 [2024-07-13 05:18:39.401793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:29:33.165 [2024-07-13 05:18:39.401825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6880 (9): Bad file descriptor 00:29:33.165 [2024-07-13 05:18:39.401929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:29:33.165 [2024-07-13 05:18:39.401958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:29:33.165 [2024-07-13 05:18:39.401979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:29:33.165 [2024-07-13 05:18:39.402005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:29:33.165 [2024-07-13 05:18:39.402025] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:29:33.165 [2024-07-13 05:18:39.402044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:29:33.165 [2024-07-13 05:18:39.402069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:29:33.165 [2024-07-13 05:18:39.402089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:29:33.165 [2024-07-13 05:18:39.402107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:29:33.165 [2024-07-13 05:18:39.402166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.165 [2024-07-13 05:18:39.402207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.165 [2024-07-13 05:18:39.402224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
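The errno = 111 burst above is the expected failure signature for this test: nvmf_shutdown_tc3 stops the target underneath ten live bdevperf controllers (note the spdk_app_stop warning at the head of the burst), so every reconnect attempt from the initiator side is refused at the TCP layer. A minimal probe of that state, as a sketch only; the address and port are copied from the trace, and this check is not part of shutdown.sh:

# Hypothetical check: with nvmf_tgt gone, a raw TCP connect to the listener
# fails the same way posix_sock_create reports above (errno 111, ECONNREFUSED).
if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'; then
    echo "connect() to 10.0.0.2:4420 refused, matching the errno = 111 records"
fi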
00:29:36.441 05:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:29:36.441 05:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:29:37.004 05:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 790522
00:29:37.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (790522) - No such process
00:29:37.004 05:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true
00:29:37.004 05:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:29:37.004 05:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:29:37.004 05:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:29:37.004 05:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:29:37.004 05:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:29:37.004 05:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:29:37.004 05:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:29:37.004 05:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:29:37.004 05:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:29:37.004 05:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:29:37.004 05:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:29:37.004 rmmod nvme_tcp
00:29:37.004 rmmod nvme_fabrics
00:29:37.004 rmmod nvme_keyring
00:29:37.004 05:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:29:37.004 05:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
00:29:37.004 05:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
00:29:37.004 05:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:29:37.004 05:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:29:37.004 05:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:29:37.004 05:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:29:37.004 05:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:29:37.004 05:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:29:37.004 05:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:37.004 05:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:29:37.004 05:18:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:38.907 05:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:29:38.907
00:29:38.907 real 0m11.517s
00:29:38.907 user 0m32.801s
00:29:38.907 sys 0m2.029s
05:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:29:38.907 05:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:29:38.907 ************************************
00:29:38.907 END TEST nvmf_shutdown_tc3
00:29:38.907 ************************************
00:29:38.907 05:18:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0
00:29:38.907 05:18:45 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:29:38.907
00:29:38.907 real 0m41.930s
00:29:38.907 user 2m11.942s
00:29:38.907 sys 0m8.076s
00:29:38.907 05:18:45 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:29:38.907 05:18:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:39.166 ************************************
00:29:39.166 END TEST nvmf_shutdown
00:29:39.166 ************************************
00:29:39.166 05:18:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:29:39.166 05:18:45 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target
00:29:39.166 05:18:45 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:29:39.166 05:18:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:39.166 05:18:45 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host
00:29:39.166 05:18:45 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:29:39.166 05:18:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:39.166 05:18:45 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]]
00:29:39.166 05:18:45 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:29:39.166 05:18:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:29:39.166 05:18:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:29:39.166 05:18:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:39.166 ************************************
00:29:39.166 START TEST nvmf_multicontroller
00:29:39.166 ************************************
00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:29:39.166 * Looking for test storage...
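The nvmf_multicontroller suite that starts here attaches two controllers to the same subsystem over different listeners and exercises the duplicate-name and multipath attach paths. To rerun just this suite outside the Jenkins pipeline, a sketch only, assuming a built SPDK tree at the workspace path from the trace and the same e810 NIC setup selected by autorun-spdk.conf:

# Sketch; the script path and --transport flag are copied from the run_test record above.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo NET_TYPE=phy ./test/nvmf/host/multicontroller.sh --transport=tcp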
00:29:39.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:39.166 05:18:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:39.167 05:18:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:39.167 05:18:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:39.167 05:18:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:39.167 05:18:45 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:39.167 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:39.167 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:39.167 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:39.167 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:39.167 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.167 05:18:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:39.167 05:18:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.167 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:39.167 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:39.167 05:18:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:29:39.167 05:18:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:41.070 05:18:47 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:41.070 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:41.070 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:41.070 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:41.070 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:41.070 05:18:47 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:29:41.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:41.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms
00:29:41.070
00:29:41.070 --- 10.0.0.2 ping statistics ---
00:29:41.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:41.070 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms
00:29:41.070 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:41.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:41.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms
00:29:41.070
00:29:41.070 --- 10.0.0.1 ping statistics ---
00:29:41.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:41.071 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms
00:29:41.071 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:41.071 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0
00:29:41.071 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:29:41.071 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:41.071 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:29:41.071 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:29:41.071 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:41.071 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:29:41.071 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:29:41.071 05:18:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE
00:29:41.071 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:41.071 05:18:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable
00:29:41.071 05:18:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:29:41.071 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=793298
00:29:41.071 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:41.071 05:18:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 793298
00:29:41.071 05:18:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 793298 ']'
00:29:41.071 05:18:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:41.071 05:18:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:41.071 05:18:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:41.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:41.071 05:18:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:41.071 05:18:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:29:41.329 [2024-07-13 05:18:47.543815] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:29:41.329 [2024-07-13 05:18:47.543972] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:41.329 EAL: No free 2048 kB hugepages reported on node 1
00:29:41.329 [2024-07-13 05:18:47.681728] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:41.587 [2024-07-13 05:18:47.938429] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:41.587 [2024-07-13 05:18:47.938520] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:41.587 [2024-07-13 05:18:47.938554] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:41.587 [2024-07-13 05:18:47.938575] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:41.587 [2024-07-13 05:18:47.938597] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:41.587 [2024-07-13 05:18:47.938733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:29:41.587 [2024-07-13 05:18:47.938824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:41.587 [2024-07-13 05:18:47.938834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:29:42.153 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:42.153 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0
00:29:42.153 05:18:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:29:42.153 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable
00:29:42.153 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:29:42.153 05:18:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:42.153 05:18:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:29:42.153 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:42.153 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:29:42.153 [2024-07-13 05:18:48.568078] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:42.153 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:42.153 05:18:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:42.153 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:42.153 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:29:42.412 Malloc0
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:29:42.412 [2024-07-13 05:18:48.688818] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
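The cnode1 provisioning just traced (transport, malloc bdev, subsystem, namespace, listener), and the cnode2 twin that follows, all go through rpc_cmd, a thin wrapper over SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock. The same sequence as a standalone sketch, with every verb and flag copied from the records:

# Sketch of the target-side setup performed by multicontroller.sh@27-@34.
RPC="sudo scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421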
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:29:42.412 [2024-07-13 05:18:48.696680] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:29:42.412 Malloc1
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=793456
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 793456 /var/tmp/bdevperf.sock
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 793456 ']'
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:29:42.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:42.412 05:18:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:29:43.372 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:43.372 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0
00:29:43.373 05:18:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
00:29:43.373 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:43.373 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:29:43.631 NVMe0n1
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:43.631 1
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:29:43.631 request:
00:29:43.631 {
00:29:43.631 "name": "NVMe0",
00:29:43.631 "trtype": "tcp",
00:29:43.631 "traddr": "10.0.0.2",
00:29:43.631 "adrfam": "ipv4",
00:29:43.631 "trsvcid": "4420",
00:29:43.631 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:43.631 "hostnqn": "nqn.2021-09-7.io.spdk:00001",
00:29:43.631 "hostaddr": "10.0.0.2",
00:29:43.631 "hostsvcid": "60000",
00:29:43.631 "prchk_reftag": false,
00:29:43.631 "prchk_guard": false,
00:29:43.631 "hdgst": false,
00:29:43.631 "ddgst": false,
00:29:43.631 "method": "bdev_nvme_attach_controller",
00:29:43.631 "req_id": 1
00:29:43.631 }
00:29:43.631 Got JSON-RPC error response
00:29:43.631 response:
00:29:43.631 {
00:29:43.631 "code": -114,
00:29:43.631 "message": "A controller named NVMe0 already exists with the specified network path\n"
00:29:43.631 }
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:43.631 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:29:43.632 request:
00:29:43.632 {
00:29:43.632 "name": "NVMe0",
00:29:43.632 "trtype": "tcp",
00:29:43.632 "traddr": "10.0.0.2",
00:29:43.632 "adrfam": "ipv4",
00:29:43.632 "trsvcid": "4420",
00:29:43.632 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:29:43.632 "hostaddr": "10.0.0.2",
00:29:43.632 "hostsvcid": "60000",
00:29:43.632 "prchk_reftag": false,
00:29:43.632 "prchk_guard": false,
00:29:43.632 "hdgst": false,
00:29:43.632 "ddgst": false,
00:29:43.632 "method": "bdev_nvme_attach_controller",
00:29:43.632 "req_id": 1
00:29:43.632 }
00:29:43.632 Got JSON-RPC error response
00:29:43.632 response:
00:29:43.632 {
00:29:43.632 "code": -114,
00:29:43.632 "message": "A controller named NVMe0 already exists with the specified network path\n"
00:29:43.632 }
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:29:43.632 request:
00:29:43.632 {
00:29:43.632 "name": "NVMe0",
00:29:43.632 "trtype": "tcp",
00:29:43.632 "traddr": "10.0.0.2",
00:29:43.632 "adrfam": "ipv4",
00:29:43.632 "trsvcid": "4420",
00:29:43.632 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:43.632 "hostaddr": "10.0.0.2",
00:29:43.632 "hostsvcid": "60000",
00:29:43.632 "prchk_reftag": false,
00:29:43.632 "prchk_guard": false,
00:29:43.632 "hdgst": false,
00:29:43.632 "ddgst": false,
00:29:43.632 "multipath": "disable",
00:29:43.632 "method": "bdev_nvme_attach_controller",
00:29:43.632 "req_id": 1
00:29:43.632 }
00:29:43.632 Got JSON-RPC error response
00:29:43.632 response:
00:29:43.632 {
00:29:43.632 "code": -114,
00:29:43.632 "message": "A controller named NVMe0 already exists and multipath is disabled\n"
00:29:43.632 }
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:29:43.632 request:
00:29:43.632 {
00:29:43.632 "name": "NVMe0",
00:29:43.632 "trtype": "tcp",
00:29:43.632 "traddr": "10.0.0.2",
00:29:43.632 "adrfam": "ipv4",
00:29:43.632 "trsvcid": "4420",
00:29:43.632 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:43.632 "hostaddr": "10.0.0.2",
00:29:43.632 "hostsvcid": "60000",
00:29:43.632 "prchk_reftag": false,
00:29:43.632 "prchk_guard": false,
00:29:43.632 "hdgst": false,
00:29:43.632 "ddgst": false,
00:29:43.632 "multipath": "failover",
00:29:43.632 "method": "bdev_nvme_attach_controller",
00:29:43.632 "req_id": 1
00:29:43.632 }
00:29:43.632 Got JSON-RPC error response
00:29:43.632 response:
00:29:43.632 {
00:29:43.632 "code": -114,
00:29:43.632 "message": "A controller named NVMe0 already exists with the specified network path\n"
00:29:43.632 }
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:43.632 05:18:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:29:43.890
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.890 05:18:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:43.890 05:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.890 05:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:43.890 05:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.890 05:18:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:29:43.890 05:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.890 05:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:43.890 00:29:43.890 05:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.890 05:18:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:43.890 05:18:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:43.890 05:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.890 05:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:43.890 05:18:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.890 05:18:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:43.890 05:18:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:45.266 0 00:29:45.266 05:18:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:45.266 05:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.266 05:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:45.266 05:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.266 05:18:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 793456 00:29:45.266 05:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 793456 ']' 00:29:45.266 05:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 793456 00:29:45.266 05:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:29:45.266 05:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:45.266 05:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 793456 00:29:45.266 05:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:45.266 05:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:45.266 05:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 793456' 00:29:45.266 killing process with pid 793456 00:29:45.266 05:18:51 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 793456 00:29:45.266 05:18:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 793456 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:29:46.200 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:46.200 [2024-07-13 05:18:48.887030] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:46.200 [2024-07-13 05:18:48.887207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid793456 ] 00:29:46.200 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.200 [2024-07-13 05:18:49.016281] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.200 [2024-07-13 05:18:49.251826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.200 [2024-07-13 05:18:50.365125] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 0f3b112f-c707-4fb5-a590-26202b1d295e already exists 00:29:46.200 [2024-07-13 05:18:50.365204] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:0f3b112f-c707-4fb5-a590-26202b1d295e alias for bdev NVMe1n1 00:29:46.200 [2024-07-13 05:18:50.365245] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:46.200 Running I/O for 1 seconds... 
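For reference, the bdevperf flow captured in the try.txt dump above (start idle, attach the controller over a private RPC socket, then trigger the one-second write job whose results follow below) condenses to a short sketch. The binary path and workload flags here are assumptions inferred from the EAL parameters and the "depth: 128, IO size: 4096, workload: write" job description in this log, not the harness's literal invocation:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Start bdevperf idle (-z) on its own RPC socket; -q/-o/-w/-t mirror the job shown below.
$SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 &
# Attach the target the same way the rpc_cmd records above do.
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# Kick off the configured job, as multicontroller.sh@95 does.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests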
00:29:46.200 00:29:46.200 Latency(us) 00:29:46.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:46.200 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:46.200 NVMe0n1 : 1.01 13278.40 51.87 0.00 0.00 9622.19 5801.15 19806.44 00:29:46.200 =================================================================================================================== 00:29:46.200 Total : 13278.40 51.87 0.00 0.00 9622.19 5801.15 19806.44 00:29:46.200 Received shutdown signal, test time was about 1.000000 seconds 00:29:46.200 00:29:46.200 Latency(us) 00:29:46.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:46.200 =================================================================================================================== 00:29:46.200 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:46.200 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:46.200 rmmod nvme_tcp 00:29:46.200 rmmod nvme_fabrics 00:29:46.200 rmmod nvme_keyring 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 793298 ']' 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 793298 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 793298 ']' 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 793298 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 793298 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:46.200 05:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 793298' 00:29:46.200 killing process with pid 793298 00:29:46.201 05:18:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 793298 00:29:46.201 05:18:52 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 793298 00:29:48.100 05:18:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:48.100 05:18:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:48.100 05:18:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:48.100 05:18:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:48.100 05:18:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:48.100 05:18:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.100 05:18:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:48.100 05:18:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.999 05:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:49.999 00:29:49.999 real 0m10.796s 00:29:49.999 user 0m22.517s 00:29:49.999 sys 0m2.474s 00:29:49.999 05:18:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:49.999 05:18:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:49.999 ************************************ 00:29:49.999 END TEST nvmf_multicontroller 00:29:49.999 ************************************ 00:29:49.999 05:18:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:49.999 05:18:56 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:49.999 05:18:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:49.999 05:18:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:49.999 05:18:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:49.999 ************************************ 00:29:49.999 START TEST nvmf_aer 00:29:49.999 ************************************ 00:29:49.999 05:18:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:49.999 * Looking for test storage... 
00:29:49.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:49.999 05:18:56 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:49.999 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:49.999 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:49.999 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:49.999 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:49.999 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:49.999 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:49.999 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:49.999 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:49.999 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:49.999 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:49.999 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:49.999 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:49.999 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:49.999 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:49.999 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:49.999 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:49.999 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:49.999 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:49.999 05:18:56 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:49.999 05:18:56 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:49.999 05:18:56 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:49.999 05:18:56 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.000 05:18:56 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.000 05:18:56 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.000 05:18:56 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:50.000 05:18:56 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.000 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:29:50.000 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:50.000 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:50.000 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:50.000 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:50.000 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:50.000 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:50.000 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:50.000 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:50.000 05:18:56 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:50.000 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:50.000 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:50.000 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:50.000 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:50.000 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:50.000 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.000 05:18:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:50.000 05:18:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.000 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:50.000 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:50.000 05:18:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:29:50.000 05:18:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:51.898 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:29:51.898 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:51.898 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:51.898 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:51.898 
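Having picked cvl_0_0 as the target interface, the nvmf_tcp_init records that follow assign the initiator side and wire the two e810 ports into a split target/initiator topology. Condensed from the nvmf/common.sh@229-268 xtrace below, with commands and addresses copied from this log, so this is a summary sketch rather than the harness itself:

ip netns add cvl_0_0_ns_spdk                          # target port gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # sanity checks in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1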
05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:51.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:51.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:29:51.898 00:29:51.898 --- 10.0.0.2 ping statistics --- 00:29:51.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.898 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:51.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:51.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:29:51.898 00:29:51.898 --- 10.0.0.1 ping statistics --- 00:29:51.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.898 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=796048 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 796048 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 796048 ']' 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:51.898 05:18:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:52.156 [2024-07-13 05:18:58.463152] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:52.156 [2024-07-13 05:18:58.463303] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:52.156 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.156 [2024-07-13 05:18:58.597871] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:52.413 [2024-07-13 05:18:58.858625] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:52.413 [2024-07-13 05:18:58.858713] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:52.413 [2024-07-13 05:18:58.858742] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:52.413 [2024-07-13 05:18:58.858764] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:52.413 [2024-07-13 05:18:58.858785] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:52.413 [2024-07-13 05:18:58.858940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.413 [2024-07-13 05:18:58.858979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:52.413 [2024-07-13 05:18:58.859043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.413 [2024-07-13 05:18:58.859053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:52.978 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:52.978 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:29:52.978 05:18:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:52.978 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:52.978 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:52.978 05:18:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:52.978 05:18:59 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:52.978 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.978 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:52.978 [2024-07-13 05:18:59.395202] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.978 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.978 05:18:59 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:52.978 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.978 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:53.236 Malloc0 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:53.236 [2024-07-13 05:18:59.500984] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:53.236 [ 00:29:53.236 { 00:29:53.236 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:53.236 "subtype": "Discovery", 00:29:53.236 "listen_addresses": [], 00:29:53.236 "allow_any_host": true, 00:29:53.236 "hosts": [] 00:29:53.236 }, 00:29:53.236 { 00:29:53.236 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:53.236 "subtype": "NVMe", 00:29:53.236 "listen_addresses": [ 00:29:53.236 { 00:29:53.236 "trtype": "TCP", 00:29:53.236 "adrfam": "IPv4", 00:29:53.236 "traddr": "10.0.0.2", 00:29:53.236 "trsvcid": "4420" 00:29:53.236 } 00:29:53.236 ], 00:29:53.236 "allow_any_host": true, 00:29:53.236 "hosts": [], 00:29:53.236 "serial_number": "SPDK00000000000001", 00:29:53.236 "model_number": "SPDK bdev Controller", 00:29:53.236 "max_namespaces": 2, 00:29:53.236 "min_cntlid": 1, 00:29:53.236 "max_cntlid": 65519, 00:29:53.236 "namespaces": [ 00:29:53.236 { 00:29:53.236 "nsid": 1, 00:29:53.236 "bdev_name": "Malloc0", 00:29:53.236 "name": "Malloc0", 00:29:53.236 "nguid": "28E3B65CD4544867BB5FD79F9A45280D", 00:29:53.236 "uuid": "28e3b65c-d454-4867-bb5f-d79f9a45280d" 00:29:53.236 } 00:29:53.236 ] 00:29:53.236 } 00:29:53.236 ] 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=796204 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:53.236 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:29:53.236 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:53.493 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:53.493 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:53.493 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:29:53.493 05:18:59 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:53.493 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.493 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:53.493 Malloc1 00:29:53.493 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.493 05:18:59 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:53.493 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.493 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:53.493 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.493 05:18:59 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:53.493 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.493 05:18:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:53.751 [ 00:29:53.751 { 00:29:53.751 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:53.751 "subtype": "Discovery", 00:29:53.751 "listen_addresses": [], 00:29:53.751 "allow_any_host": true, 00:29:53.751 "hosts": [] 00:29:53.751 }, 00:29:53.751 { 00:29:53.751 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:53.751 "subtype": "NVMe", 00:29:53.751 "listen_addresses": [ 00:29:53.751 { 00:29:53.751 "trtype": "TCP", 00:29:53.751 "adrfam": "IPv4", 00:29:53.751 "traddr": "10.0.0.2", 00:29:53.751 "trsvcid": "4420" 00:29:53.751 } 00:29:53.751 ], 00:29:53.751 "allow_any_host": true, 00:29:53.751 "hosts": [], 00:29:53.751 "serial_number": "SPDK00000000000001", 00:29:53.751 "model_number": "SPDK bdev Controller", 00:29:53.751 "max_namespaces": 2, 00:29:53.751 "min_cntlid": 1, 00:29:53.751 "max_cntlid": 65519, 00:29:53.751 "namespaces": [ 00:29:53.751 { 00:29:53.751 "nsid": 1, 00:29:53.751 "bdev_name": "Malloc0", 00:29:53.751 "name": "Malloc0", 00:29:53.751 "nguid": "28E3B65CD4544867BB5FD79F9A45280D", 00:29:53.751 "uuid": "28e3b65c-d454-4867-bb5f-d79f9a45280d" 00:29:53.751 }, 00:29:53.751 { 00:29:53.751 "nsid": 2, 00:29:53.751 "bdev_name": "Malloc1", 00:29:53.751 "name": "Malloc1", 00:29:53.751 "nguid": "4650411EC9A840FD8A64D9A340C22338", 00:29:53.751 "uuid": "4650411e-c9a8-40fd-8a64-d9a340c22338" 00:29:53.751 } 00:29:53.751 ] 00:29:53.751 } 00:29:53.751 ] 00:29:53.751 05:19:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.751 05:19:00 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 796204 00:29:53.751 Asynchronous Event Request test 00:29:53.751 Attaching to 10.0.0.2 00:29:53.751 Attached to 10.0.0.2 00:29:53.751 Registering asynchronous event callbacks... 00:29:53.751 Starting namespace attribute notice tests for all controllers... 
00:29:53.751 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:53.751 aer_cb - Changed Namespace 00:29:53.751 Cleaning up... 00:29:53.751 05:19:00 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:53.751 05:19:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.751 05:19:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:53.751 05:19:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.751 05:19:00 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:53.751 05:19:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.751 05:19:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:54.030 05:19:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.030 05:19:00 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:54.030 05:19:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.030 05:19:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:54.030 05:19:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.030 05:19:00 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:54.030 05:19:00 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:54.030 05:19:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:54.030 05:19:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:29:54.030 05:19:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:54.030 05:19:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:29:54.030 05:19:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:54.030 05:19:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:54.030 rmmod nvme_tcp 00:29:54.030 rmmod nvme_fabrics 00:29:54.030 rmmod nvme_keyring 00:29:54.030 05:19:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:54.030 05:19:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:29:54.030 05:19:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:29:54.030 05:19:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 796048 ']' 00:29:54.030 05:19:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 796048 00:29:54.030 05:19:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 796048 ']' 00:29:54.030 05:19:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 796048 00:29:54.030 05:19:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:29:54.030 05:19:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:54.030 05:19:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 796048 00:29:54.030 05:19:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:54.030 05:19:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:54.030 05:19:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 796048' 00:29:54.030 killing process with pid 796048 00:29:54.030 05:19:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 796048 00:29:54.030 05:19:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 796048 00:29:55.407 05:19:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
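Stripped of the xtrace plumbing, the nvmf_aer pass above is a short RPC sequence. Every command below appears verbatim in the log (target launch at nvmf/common.sh@480, provisioning at host/aer.sh@14-19, and the hot-add at host/aer.sh@39-40 that fires the Changed Namespace event); only the shell variables are added here for brevity, and rpc.py is assumed to use its default /var/tmp/spdk.sock:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC=$SPDK/scripts/rpc.py
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

$RPC nvmf_create_transport -t tcp -o -u 8192                    # aer.sh@14
$RPC bdev_malloc_create 64 512 --name Malloc0                   # aer.sh@16
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# With test/nvme/aer/aer connected and waiting on /tmp/aer_touch_file, a second
# namespace is hot-added; the "aer_cb for log page 4 ... Changed Namespace" lines
# above are the resulting asynchronous event.
$RPC bdev_malloc_create 64 4096 --name Malloc1                  # aer.sh@39
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # aer.sh@40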
00:29:55.407 05:19:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:55.407 05:19:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:55.407 05:19:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:55.407 05:19:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:55.407 05:19:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.407 05:19:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:55.407 05:19:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.304 05:19:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:57.304 00:29:57.304 real 0m7.450s 00:29:57.304 user 0m10.698s 00:29:57.304 sys 0m2.093s 00:29:57.304 05:19:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:57.304 05:19:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:57.304 ************************************ 00:29:57.304 END TEST nvmf_aer 00:29:57.304 ************************************ 00:29:57.304 05:19:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:57.304 05:19:03 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:57.304 05:19:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:57.304 05:19:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:57.304 05:19:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:57.304 ************************************ 00:29:57.304 START TEST nvmf_async_init 00:29:57.304 ************************************ 00:29:57.304 05:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:57.563 * Looking for test storage... 
00:29:57.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=65a04432ce9247f89f548d085688bf26 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:57.563 05:19:03 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:29:57.563 05:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:59.464 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:59.464 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:59.464 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
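The xtrace above shows nvmf/common.sh resolving each discovered E810 function (device ID 0x159b) to its kernel net device by globbing /sys/bus/pci/devices/$pci/net/ and keeping only link-up interfaces. A minimal standalone sketch of that walk, assuming a root shell and the PCI address this run reported:

#!/usr/bin/env bash
# Sketch only, not the SPDK helper itself: map a PCI function to the
# net device(s) the kernel exposes for it, via the same sysfs glob.
pci=0000:0a:00.0                     # address taken from the log above
for dev in /sys/bus/pci/devices/$pci/net/*; do
    [ -e "$dev" ] || continue        # glob stays literal if no driver is bound
    name=${dev##*/}                  # strip the sysfs path, keep the ifname
    state=$(cat "$dev/operstate")    # "up" / "down"
    echo "$pci -> $name ($state)"
done

The `[[ up == up ]]` lines in the trace are apparently this operstate test, which is why only link-up ports (cvl_0_0 and cvl_0_1 in this run) end up in net_devs.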
00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:59.464 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:59.464 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:59.724 05:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:59.724 05:19:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:29:59.724 05:19:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:59.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:59.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:29:59.724 00:29:59.724 --- 10.0.0.2 ping statistics --- 00:29:59.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.724 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:29:59.724 05:19:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:59.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:59.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:29:59.724 00:29:59.724 --- 10.0.0.1 ping statistics --- 00:29:59.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.724 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:29:59.724 05:19:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.724 05:19:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:29:59.724 05:19:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:59.724 05:19:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.724 05:19:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:59.724 05:19:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:59.724 05:19:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.724 05:19:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:59.724 05:19:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:59.724 05:19:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:59.724 05:19:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:59.724 05:19:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:59.724 05:19:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.724 05:19:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=798282 00:29:59.724 05:19:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:59.724 05:19:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 798282 00:29:59.724 05:19:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 798282 ']' 00:29:59.724 05:19:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.724 05:19:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:59.724 05:19:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.724 05:19:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:59.724 05:19:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.724 [2024-07-13 05:19:06.128643] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
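The block above is nvmf_tcp_init splitting the two ports across network namespaces so initiator and target traffic crosses the physical link between the E810 ports instead of loopback. Collected from the xtrace into a standalone sketch (root required; interface names are the ones this run reported):

ns=cvl_0_0_ns_spdk
ip netns add "$ns"
ip link set cvl_0_0 netns "$ns"            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator port stays in the root ns
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$ns" ip link set cvl_0_0 up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                         # root ns -> namespaced target
ip netns exec "$ns" ping -c 1 10.0.0.1     # and back

The two successful pings above were the sanity check before nvmf_tgt was started inside the namespace (`ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1`), whose EAL banner begins here.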
00:29:59.724 [2024-07-13 05:19:06.128775] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.724 EAL: No free 2048 kB hugepages reported on node 1 00:30:00.006 [2024-07-13 05:19:06.272925] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.274 [2024-07-13 05:19:06.541060] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:00.274 [2024-07-13 05:19:06.541124] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:00.274 [2024-07-13 05:19:06.541168] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:00.274 [2024-07-13 05:19:06.541213] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:00.274 [2024-07-13 05:19:06.541236] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:00.274 [2024-07-13 05:19:06.541288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:00.841 [2024-07-13 05:19:07.125830] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:00.841 null0 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:00.841 05:19:07 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 65a04432ce9247f89f548d085688bf26 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:00.841 [2024-07-13 05:19:07.166120] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:00.841 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:01.100 nvme0n1 00:30:01.100 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.100 05:19:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:01.100 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.100 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:01.100 [ 00:30:01.100 { 00:30:01.100 "name": "nvme0n1", 00:30:01.100 "aliases": [ 00:30:01.100 "65a04432-ce92-47f8-9f54-8d085688bf26" 00:30:01.100 ], 00:30:01.100 "product_name": "NVMe disk", 00:30:01.100 "block_size": 512, 00:30:01.100 "num_blocks": 2097152, 00:30:01.100 "uuid": "65a04432-ce92-47f8-9f54-8d085688bf26", 00:30:01.100 "assigned_rate_limits": { 00:30:01.100 "rw_ios_per_sec": 0, 00:30:01.100 "rw_mbytes_per_sec": 0, 00:30:01.100 "r_mbytes_per_sec": 0, 00:30:01.100 "w_mbytes_per_sec": 0 00:30:01.100 }, 00:30:01.100 "claimed": false, 00:30:01.100 "zoned": false, 00:30:01.100 "supported_io_types": { 00:30:01.100 "read": true, 00:30:01.100 "write": true, 00:30:01.100 "unmap": false, 00:30:01.100 "flush": true, 00:30:01.100 "reset": true, 00:30:01.100 "nvme_admin": true, 00:30:01.100 "nvme_io": true, 00:30:01.100 "nvme_io_md": false, 00:30:01.100 "write_zeroes": true, 00:30:01.100 "zcopy": false, 00:30:01.100 "get_zone_info": false, 00:30:01.100 "zone_management": false, 00:30:01.100 "zone_append": false, 00:30:01.100 "compare": true, 00:30:01.100 "compare_and_write": true, 00:30:01.100 "abort": true, 00:30:01.100 "seek_hole": false, 00:30:01.100 "seek_data": false, 00:30:01.100 "copy": true, 00:30:01.100 "nvme_iov_md": false 00:30:01.100 }, 00:30:01.100 "memory_domains": [ 00:30:01.100 { 00:30:01.100 "dma_device_id": "system", 00:30:01.100 "dma_device_type": 1 00:30:01.100 } 00:30:01.100 ], 00:30:01.100 "driver_specific": { 00:30:01.100 "nvme": [ 00:30:01.100 { 00:30:01.100 "trid": { 00:30:01.100 "trtype": "TCP", 00:30:01.100 "adrfam": "IPv4", 00:30:01.100 "traddr": "10.0.0.2", 
00:30:01.100 "trsvcid": "4420", 00:30:01.100 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:01.100 }, 00:30:01.100 "ctrlr_data": { 00:30:01.100 "cntlid": 1, 00:30:01.100 "vendor_id": "0x8086", 00:30:01.100 "model_number": "SPDK bdev Controller", 00:30:01.100 "serial_number": "00000000000000000000", 00:30:01.100 "firmware_revision": "24.09", 00:30:01.100 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:01.100 "oacs": { 00:30:01.100 "security": 0, 00:30:01.100 "format": 0, 00:30:01.100 "firmware": 0, 00:30:01.100 "ns_manage": 0 00:30:01.100 }, 00:30:01.100 "multi_ctrlr": true, 00:30:01.100 "ana_reporting": false 00:30:01.100 }, 00:30:01.100 "vs": { 00:30:01.100 "nvme_version": "1.3" 00:30:01.100 }, 00:30:01.100 "ns_data": { 00:30:01.100 "id": 1, 00:30:01.100 "can_share": true 00:30:01.100 } 00:30:01.100 } 00:30:01.100 ], 00:30:01.100 "mp_policy": "active_passive" 00:30:01.100 } 00:30:01.100 } 00:30:01.100 ] 00:30:01.100 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.100 05:19:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:01.100 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.100 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:01.101 [2024-07-13 05:19:07.422766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:01.101 [2024-07-13 05:19:07.422918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:30:01.101 [2024-07-13 05:19:07.555121] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:01.101 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.101 05:19:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:01.101 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.101 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:01.101 [ 00:30:01.101 { 00:30:01.101 "name": "nvme0n1", 00:30:01.101 "aliases": [ 00:30:01.101 "65a04432-ce92-47f8-9f54-8d085688bf26" 00:30:01.101 ], 00:30:01.101 "product_name": "NVMe disk", 00:30:01.101 "block_size": 512, 00:30:01.101 "num_blocks": 2097152, 00:30:01.101 "uuid": "65a04432-ce92-47f8-9f54-8d085688bf26", 00:30:01.101 "assigned_rate_limits": { 00:30:01.101 "rw_ios_per_sec": 0, 00:30:01.101 "rw_mbytes_per_sec": 0, 00:30:01.101 "r_mbytes_per_sec": 0, 00:30:01.101 "w_mbytes_per_sec": 0 00:30:01.101 }, 00:30:01.101 "claimed": false, 00:30:01.101 "zoned": false, 00:30:01.101 "supported_io_types": { 00:30:01.101 "read": true, 00:30:01.101 "write": true, 00:30:01.101 "unmap": false, 00:30:01.101 "flush": true, 00:30:01.101 "reset": true, 00:30:01.101 "nvme_admin": true, 00:30:01.101 "nvme_io": true, 00:30:01.101 "nvme_io_md": false, 00:30:01.101 "write_zeroes": true, 00:30:01.101 "zcopy": false, 00:30:01.101 "get_zone_info": false, 00:30:01.101 "zone_management": false, 00:30:01.101 "zone_append": false, 00:30:01.101 "compare": true, 00:30:01.101 "compare_and_write": true, 00:30:01.101 "abort": true, 00:30:01.101 "seek_hole": false, 00:30:01.101 "seek_data": false, 00:30:01.101 "copy": true, 00:30:01.101 "nvme_iov_md": false 00:30:01.101 }, 00:30:01.101 "memory_domains": [ 00:30:01.101 { 00:30:01.101 "dma_device_id": "system", 00:30:01.101 
"dma_device_type": 1 00:30:01.101 } 00:30:01.101 ], 00:30:01.101 "driver_specific": { 00:30:01.101 "nvme": [ 00:30:01.101 { 00:30:01.101 "trid": { 00:30:01.101 "trtype": "TCP", 00:30:01.101 "adrfam": "IPv4", 00:30:01.101 "traddr": "10.0.0.2", 00:30:01.101 "trsvcid": "4420", 00:30:01.101 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:01.101 }, 00:30:01.101 "ctrlr_data": { 00:30:01.101 "cntlid": 2, 00:30:01.101 "vendor_id": "0x8086", 00:30:01.101 "model_number": "SPDK bdev Controller", 00:30:01.101 "serial_number": "00000000000000000000", 00:30:01.101 "firmware_revision": "24.09", 00:30:01.101 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:01.101 "oacs": { 00:30:01.101 "security": 0, 00:30:01.101 "format": 0, 00:30:01.101 "firmware": 0, 00:30:01.101 "ns_manage": 0 00:30:01.101 }, 00:30:01.101 "multi_ctrlr": true, 00:30:01.101 "ana_reporting": false 00:30:01.101 }, 00:30:01.101 "vs": { 00:30:01.101 "nvme_version": "1.3" 00:30:01.101 }, 00:30:01.101 "ns_data": { 00:30:01.101 "id": 1, 00:30:01.101 "can_share": true 00:30:01.101 } 00:30:01.101 } 00:30:01.101 ], 00:30:01.101 "mp_policy": "active_passive" 00:30:01.101 } 00:30:01.101 } 00:30:01.101 ] 00:30:01.101 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.101 05:19:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:01.101 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.101 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:01.101 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.101 05:19:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:01.101 05:19:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.aRS8ys7Ih1 00:30:01.101 05:19:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:01.101 05:19:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.aRS8ys7Ih1 00:30:01.389 05:19:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:01.389 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.389 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:01.389 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.389 05:19:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:30:01.389 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.389 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:01.389 [2024-07-13 05:19:07.611513] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:01.389 [2024-07-13 05:19:07.611780] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:01.389 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.389 05:19:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aRS8ys7Ih1 00:30:01.389 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:30:01.389 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:01.389 [2024-07-13 05:19:07.619477] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:01.389 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.389 05:19:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aRS8ys7Ih1 00:30:01.389 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.389 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:01.389 [2024-07-13 05:19:07.627507] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:01.389 [2024-07-13 05:19:07.627637] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:30:01.389 nvme0n1 00:30:01.389 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.389 05:19:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:01.389 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.389 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:01.389 [ 00:30:01.389 { 00:30:01.389 "name": "nvme0n1", 00:30:01.389 "aliases": [ 00:30:01.389 "65a04432-ce92-47f8-9f54-8d085688bf26" 00:30:01.389 ], 00:30:01.389 "product_name": "NVMe disk", 00:30:01.389 "block_size": 512, 00:30:01.389 "num_blocks": 2097152, 00:30:01.389 "uuid": "65a04432-ce92-47f8-9f54-8d085688bf26", 00:30:01.389 "assigned_rate_limits": { 00:30:01.389 "rw_ios_per_sec": 0, 00:30:01.389 "rw_mbytes_per_sec": 0, 00:30:01.389 "r_mbytes_per_sec": 0, 00:30:01.389 "w_mbytes_per_sec": 0 00:30:01.389 }, 00:30:01.389 "claimed": false, 00:30:01.389 "zoned": false, 00:30:01.389 "supported_io_types": { 00:30:01.389 "read": true, 00:30:01.389 "write": true, 00:30:01.389 "unmap": false, 00:30:01.389 "flush": true, 00:30:01.389 "reset": true, 00:30:01.389 "nvme_admin": true, 00:30:01.389 "nvme_io": true, 00:30:01.389 "nvme_io_md": false, 00:30:01.389 "write_zeroes": true, 00:30:01.389 "zcopy": false, 00:30:01.389 "get_zone_info": false, 00:30:01.389 "zone_management": false, 00:30:01.389 "zone_append": false, 00:30:01.389 "compare": true, 00:30:01.389 "compare_and_write": true, 00:30:01.389 "abort": true, 00:30:01.389 "seek_hole": false, 00:30:01.389 "seek_data": false, 00:30:01.389 "copy": true, 00:30:01.389 "nvme_iov_md": false 00:30:01.389 }, 00:30:01.389 "memory_domains": [ 00:30:01.389 { 00:30:01.389 "dma_device_id": "system", 00:30:01.389 "dma_device_type": 1 00:30:01.389 } 00:30:01.389 ], 00:30:01.389 "driver_specific": { 00:30:01.389 "nvme": [ 00:30:01.389 { 00:30:01.389 "trid": { 00:30:01.389 "trtype": "TCP", 00:30:01.389 "adrfam": "IPv4", 00:30:01.389 "traddr": "10.0.0.2", 00:30:01.389 "trsvcid": "4421", 00:30:01.389 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:01.389 }, 00:30:01.389 "ctrlr_data": { 00:30:01.389 "cntlid": 3, 00:30:01.389 "vendor_id": "0x8086", 00:30:01.389 "model_number": "SPDK bdev Controller", 00:30:01.389 "serial_number": "00000000000000000000", 00:30:01.389 "firmware_revision": "24.09", 00:30:01.389 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:30:01.389 "oacs": { 00:30:01.389 "security": 0, 00:30:01.389 "format": 0, 00:30:01.389 "firmware": 0, 00:30:01.389 "ns_manage": 0 00:30:01.389 }, 00:30:01.389 "multi_ctrlr": true, 00:30:01.390 "ana_reporting": false 00:30:01.390 }, 00:30:01.390 "vs": { 00:30:01.390 "nvme_version": "1.3" 00:30:01.390 }, 00:30:01.390 "ns_data": { 00:30:01.390 "id": 1, 00:30:01.390 "can_share": true 00:30:01.390 } 00:30:01.390 } 00:30:01.390 ], 00:30:01.390 "mp_policy": "active_passive" 00:30:01.390 } 00:30:01.390 } 00:30:01.390 ] 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.aRS8ys7Ih1 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:01.390 rmmod nvme_tcp 00:30:01.390 rmmod nvme_fabrics 00:30:01.390 rmmod nvme_keyring 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 798282 ']' 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 798282 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 798282 ']' 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 798282 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 798282 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 798282' 00:30:01.390 killing process with pid 798282 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 798282 00:30:01.390 [2024-07-13 05:19:07.817364] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:30:01.390 05:19:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 798282 00:30:01.390 [2024-07-13 05:19:07.817421] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:02.765 05:19:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:02.765 05:19:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:02.765 05:19:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:02.765 05:19:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:02.765 05:19:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:02.765 05:19:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.765 05:19:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:02.765 05:19:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.292 05:19:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:05.292 00:30:05.292 real 0m7.417s 00:30:05.292 user 0m4.029s 00:30:05.292 sys 0m2.050s 00:30:05.292 05:19:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:05.292 05:19:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:05.292 ************************************ 00:30:05.292 END TEST nvmf_async_init 00:30:05.292 ************************************ 00:30:05.292 05:19:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:05.292 05:19:11 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:05.292 05:19:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:05.292 05:19:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:05.292 05:19:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:05.292 ************************************ 00:30:05.292 START TEST dma 00:30:05.292 ************************************ 00:30:05.292 05:19:11 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:05.292 * Looking for test storage... 
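Before the dma suite's output continues: the async_init test that just finished drove everything through rpc_cmd against the target's UNIX socket. A condensed sketch of the same sequence using scripts/rpc.py directly (a stand-in for rpc_cmd; every flag and value below is the one this run logged, including the deprecated --psk path the warnings above refer to):

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t tcp -o
$rpc bdev_null_create null0 1024 512          # 1024 MiB, 512 B blocks -> the 2097152 blocks in the JSON above
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
    -g 65a04432ce9247f89f548d085688bf26       # nguid from uuidgen | tr -d -
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0
# TLS leg: a PSK on disk plus a second, secured listener on 4421
key=$(mktemp)
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key"
chmod 0600 "$key"
$rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key"
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key"

Between those steps the test read the resulting nvme0n1 bdev back with bdev_get_bdevs (the three JSON dumps above) and forced a reconnect with bdev_nvme_reset_controller nvme0, verifying that cntlid advanced 1 -> 2 -> 3 across the plain attach, the reset, and the TLS attach.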
00:30:05.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:05.292 05:19:11 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:05.292 05:19:11 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:30:05.292 05:19:11 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:05.292 05:19:11 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:05.292 05:19:11 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:05.292 05:19:11 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:05.292 05:19:11 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:05.292 05:19:11 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:05.292 05:19:11 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:05.292 05:19:11 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:05.292 05:19:11 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:05.292 05:19:11 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:05.292 05:19:11 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:05.292 05:19:11 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:05.292 05:19:11 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:05.292 05:19:11 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:05.292 05:19:11 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:05.292 05:19:11 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:05.292 05:19:11 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:05.292 05:19:11 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:05.292 05:19:11 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:05.292 05:19:11 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:05.292 05:19:11 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.292 05:19:11 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.292 05:19:11 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.293 05:19:11 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:30:05.293 05:19:11 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.293 05:19:11 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:30:05.293 05:19:11 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:05.293 05:19:11 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:05.293 05:19:11 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:05.293 05:19:11 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:05.293 05:19:11 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:05.293 05:19:11 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:05.293 05:19:11 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:05.293 05:19:11 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:05.293 05:19:11 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:05.293 05:19:11 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:30:05.293 00:30:05.293 real 0m0.068s 00:30:05.293 user 0m0.029s 00:30:05.293 sys 0m0.044s 00:30:05.293 05:19:11 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:05.293 05:19:11 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:30:05.293 ************************************ 00:30:05.293 END TEST dma 00:30:05.293 ************************************ 00:30:05.293 05:19:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:05.293 05:19:11 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:05.293 05:19:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:05.293 05:19:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:05.293 05:19:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:05.293 ************************************ 00:30:05.293 START TEST nvmf_identify 00:30:05.293 ************************************ 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:05.293 * Looking for test storage... 
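The dma suite above completing in 0m0.068s is expected rather than a failure: its body is gated on the transport type, and for tcp it exits before doing any work. The guard the xtrace shows (`'[' tcp '!=' rdma ']'` followed by `exit 0`) amounts to the following sketch, where the variable name is illustrative since the trace only shows its expanded value:

# host/dma.sh, paraphrased: the DMA/memory-domain tests only apply to
# RDMA transports, so a tcp run is a deliberate no-op success.
if [ "$TEST_TRANSPORT" != "rdma" ]; then
    exit 0
fi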
00:30:05.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:30:05.293 05:19:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:07.193 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:07.194 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:07.194 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:07.194 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:07.194 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:07.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:07.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:30:07.194 00:30:07.194 --- 10.0.0.2 ping statistics --- 00:30:07.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:07.194 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:07.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:07.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:30:07.194 00:30:07.194 --- 10.0.0.1 ping statistics --- 00:30:07.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:07.194 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=800660 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 800660 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 800660 ']' 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:07.194 05:19:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:07.195 05:19:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:07.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:07.195 05:19:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:07.195 05:19:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:07.195 [2024-07-13 05:19:13.678936] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:07.195 [2024-07-13 05:19:13.679131] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:07.453 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.453 [2024-07-13 05:19:13.826394] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:07.711 [2024-07-13 05:19:14.091915] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
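For reference, the namespace plumbing traced above reduces to the following standalone sketch. The interface names cvl_0_0 and cvl_0_1 are the two E810 ports enumerated earlier in this run and would differ on other hardware; the target address (10.0.0.2) lives inside the cvl_0_0_ns_spdk namespace while the initiator address (10.0.0.1) stays in the root namespace:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the default port
  ping -c 1 10.0.0.2                                   # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator

Both pings completing with 0% loss, as they do above, is the precondition for the identify test that follows.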
00:30:07.711 [2024-07-13 05:19:14.091974] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:07.711 [2024-07-13 05:19:14.092013] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:07.711 [2024-07-13 05:19:14.092032] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:07.711 [2024-07-13 05:19:14.092051] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:07.711 [2024-07-13 05:19:14.092170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:30:07.711 [2024-07-13 05:19:14.092246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:30:07.711 [2024-07-13 05:19:14.092298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:30:07.711 [2024-07-13 05:19:14.092307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:30:08.277 [2024-07-13 05:19:14.622358] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:30:08.277 Malloc0
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:30:08.277 [2024-07-13 05:19:14.744682] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:30:08.277 [
00:30:08.277 {
00:30:08.277 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:30:08.277 "subtype": "Discovery",
00:30:08.277 "listen_addresses": [
00:30:08.277 {
00:30:08.277 "trtype": "TCP",
00:30:08.277 "adrfam": "IPv4",
00:30:08.277 "traddr": "10.0.0.2",
00:30:08.277 "trsvcid": "4420"
00:30:08.277 }
00:30:08.277 ],
00:30:08.277 "allow_any_host": true,
00:30:08.277 "hosts": []
00:30:08.277 },
00:30:08.277 {
00:30:08.277 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:30:08.277 "subtype": "NVMe",
00:30:08.277 "listen_addresses": [
00:30:08.277 {
00:30:08.277 "trtype": "TCP",
00:30:08.277 "adrfam": "IPv4",
00:30:08.277 "traddr": "10.0.0.2",
00:30:08.277 "trsvcid": "4420"
00:30:08.277 }
00:30:08.277 ],
00:30:08.277 "allow_any_host": true,
00:30:08.277 "hosts": [],
00:30:08.277 "serial_number": "SPDK00000000000001",
00:30:08.277 "model_number": "SPDK bdev Controller",
00:30:08.277 "max_namespaces": 32,
00:30:08.277 "min_cntlid": 1,
00:30:08.277 "max_cntlid": 65519,
00:30:08.277 "namespaces": [
00:30:08.277 {
00:30:08.277 "nsid": 1,
00:30:08.277 "bdev_name": "Malloc0",
00:30:08.277 "name": "Malloc0",
00:30:08.277 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:30:08.277 "eui64": "ABCDEF0123456789",
00:30:08.277 "uuid": "9ca9ea67-7703-4bdd-aa19-2338d0c5a764"
00:30:08.277 }
00:30:08.277 ]
00:30:08.277 }
00:30:08.277 ]
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:08.277 05:19:14 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:30:08.537 [2024-07-13 05:19:14.807355] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
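The trace above is the complete target provisioning for this test: one TCP transport, one 64 MiB malloc ramdisk with 512-byte blocks, one NVM subsystem carrying that ramdisk as namespace 1, and TCP listeners on both the subsystem and the discovery service. rpc_cmd is the harness wrapper around SPDK's JSON-RPC client; replayed by hand against the same socket, the sequence would look roughly like this sketch (scripts/rpc.py path assumed relative to an SPDK checkout):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_get_subsystems        # returns the JSON dump shown above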
00:30:08.537 [2024-07-13 05:19:14.807446] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid800817 ] 00:30:08.537 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.537 [2024-07-13 05:19:14.864269] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:30:08.537 [2024-07-13 05:19:14.864388] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:08.538 [2024-07-13 05:19:14.864409] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:08.538 [2024-07-13 05:19:14.864440] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:08.538 [2024-07-13 05:19:14.864463] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:08.538 [2024-07-13 05:19:14.867943] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:30:08.538 [2024-07-13 05:19:14.868017] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:30:08.538 [2024-07-13 05:19:14.868240] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:08.538 [2024-07-13 05:19:14.868272] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:08.538 [2024-07-13 05:19:14.868288] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:08.538 [2024-07-13 05:19:14.868305] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:08.538 [2024-07-13 05:19:14.868382] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.538 [2024-07-13 05:19:14.868409] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.538 [2024-07-13 05:19:14.868426] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:08.538 [2024-07-13 05:19:14.868464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:08.538 [2024-07-13 05:19:14.868524] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:08.538 [2024-07-13 05:19:14.874904] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.538 [2024-07-13 05:19:14.874932] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.538 [2024-07-13 05:19:14.874945] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.538 [2024-07-13 05:19:14.874959] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:08.538 [2024-07-13 05:19:14.874996] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:08.538 [2024-07-13 05:19:14.875021] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:30:08.538 [2024-07-13 05:19:14.875039] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:30:08.538 [2024-07-13 05:19:14.875082] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.538 [2024-07-13 05:19:14.875101] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.538 [2024-07-13 05:19:14.875112] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:08.538 [2024-07-13 05:19:14.875133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.538 [2024-07-13 05:19:14.875169] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:08.538 [2024-07-13 05:19:14.875385] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.538 [2024-07-13 05:19:14.875410] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.538 [2024-07-13 05:19:14.875423] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.538 [2024-07-13 05:19:14.875435] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:08.538 [2024-07-13 05:19:14.875452] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:30:08.538 [2024-07-13 05:19:14.875494] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:30:08.538 [2024-07-13 05:19:14.875523] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.538 [2024-07-13 05:19:14.875551] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.538 [2024-07-13 05:19:14.875563] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:08.538 [2024-07-13 05:19:14.875586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.538 [2024-07-13 05:19:14.875617] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:08.538 [2024-07-13 05:19:14.875810] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.538 [2024-07-13 05:19:14.875834] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.538 [2024-07-13 05:19:14.875846] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.538 [2024-07-13 05:19:14.875857] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:08.538 [2024-07-13 05:19:14.875882] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:30:08.538 [2024-07-13 05:19:14.875924] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:30:08.538 [2024-07-13 05:19:14.875963] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.538 [2024-07-13 05:19:14.875977] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.538 [2024-07-13 05:19:14.875993] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:08.538 [2024-07-13 05:19:14.876012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.538 [2024-07-13 05:19:14.876043] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:08.538 [2024-07-13 05:19:14.876233] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.538 [2024-07-13 05:19:14.876260] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.538 [2024-07-13 05:19:14.876277] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.538 [2024-07-13 05:19:14.876290] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:08.538 [2024-07-13 05:19:14.876305] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:08.538 [2024-07-13 05:19:14.876334] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.538 [2024-07-13 05:19:14.876367] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.538 [2024-07-13 05:19:14.876378] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:08.538 [2024-07-13 05:19:14.876397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.538 [2024-07-13 05:19:14.876442] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:08.538 [2024-07-13 05:19:14.876650] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.538 [2024-07-13 05:19:14.876675] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.538 [2024-07-13 05:19:14.876687] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.538 [2024-07-13 05:19:14.876698] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:08.538 [2024-07-13 05:19:14.876713] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:30:08.538 [2024-07-13 05:19:14.876734] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:30:08.538 [2024-07-13 05:19:14.876762] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:08.538 [2024-07-13 05:19:14.876898] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:30:08.538 [2024-07-13 05:19:14.876929] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:08.538 [2024-07-13 05:19:14.876953] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.538 [2024-07-13 05:19:14.876966] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.538 [2024-07-13 05:19:14.876977] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:08.538 [2024-07-13 05:19:14.876995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.538 [2024-07-13 05:19:14.877026] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:08.538 [2024-07-13 05:19:14.877210] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.538 [2024-07-13 05:19:14.877235] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:30:08.538 [2024-07-13 05:19:14.877247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.538 [2024-07-13 05:19:14.877262] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:08.538 [2024-07-13 05:19:14.877278] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:08.538 [2024-07-13 05:19:14.877312] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.538 [2024-07-13 05:19:14.877345] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.538 [2024-07-13 05:19:14.877357] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:08.538 [2024-07-13 05:19:14.877380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.538 [2024-07-13 05:19:14.877426] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:08.538 [2024-07-13 05:19:14.877641] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.538 [2024-07-13 05:19:14.877665] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.538 [2024-07-13 05:19:14.877677] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.538 [2024-07-13 05:19:14.877688] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:08.538 [2024-07-13 05:19:14.877703] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:08.538 [2024-07-13 05:19:14.877730] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:30:08.538 [2024-07-13 05:19:14.877768] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:30:08.538 [2024-07-13 05:19:14.877795] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:30:08.538 [2024-07-13 05:19:14.877837] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.538 [2024-07-13 05:19:14.877856] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:08.538 [2024-07-13 05:19:14.877900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.538 [2024-07-13 05:19:14.877934] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:08.538 [2024-07-13 05:19:14.878170] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:08.538 [2024-07-13 05:19:14.878201] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:08.538 [2024-07-13 05:19:14.878224] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:08.538 [2024-07-13 05:19:14.878256] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:30:08.538 [2024-07-13 05:19:14.878270] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:08.538 [2024-07-13 05:19:14.878282] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.538 [2024-07-13 05:19:14.878328] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:08.538 [2024-07-13 05:19:14.878349] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:08.538 [2024-07-13 05:19:14.878426] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.538 [2024-07-13 05:19:14.878451] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.538 [2024-07-13 05:19:14.878462] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.538 [2024-07-13 05:19:14.878473] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:08.538 [2024-07-13 05:19:14.878504] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:30:08.538 [2024-07-13 05:19:14.878521] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:30:08.538 [2024-07-13 05:19:14.878558] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:30:08.538 [2024-07-13 05:19:14.878574] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:30:08.538 [2024-07-13 05:19:14.878590] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:30:08.538 [2024-07-13 05:19:14.878604] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:30:08.538 [2024-07-13 05:19:14.878643] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:30:08.539 [2024-07-13 05:19:14.878674] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.878687] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.878698] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:08.539 [2024-07-13 05:19:14.878721] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:08.539 [2024-07-13 05:19:14.878755] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:08.539 [2024-07-13 05:19:14.882897] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.539 [2024-07-13 05:19:14.882929] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.539 [2024-07-13 05:19:14.882942] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.882953] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:08.539 [2024-07-13 05:19:14.882973] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.882986] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.882997] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 
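The DEBUG stream here is the standard NVMe over Fabrics admin-queue bring-up, driven by spdk_nvme_identify against the discovery subsystem: FABRIC CONNECT, property reads of VS and CAP, CC.EN raised and CSTS.RDY polled, then IDENTIFY and AER configuration. As a cross-check only (not part of this job), the same discovery exchange could be driven from the initiator side with nvme-cli, assuming the nvme-tcp module the harness loaded earlier:

  nvme discover -t tcp -a 10.0.0.2 -s 4420                  # query the discovery log
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1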
00:30:08.539 [2024-07-13 05:19:14.883015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.539 [2024-07-13 05:19:14.883032] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.883044] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.883054] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:30:08.539 [2024-07-13 05:19:14.883069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.539 [2024-07-13 05:19:14.883085] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.883100] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.883111] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:30:08.539 [2024-07-13 05:19:14.883126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.539 [2024-07-13 05:19:14.883141] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.883152] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.883162] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:08.539 [2024-07-13 05:19:14.883192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.539 [2024-07-13 05:19:14.883206] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:30:08.539 [2024-07-13 05:19:14.883235] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:08.539 [2024-07-13 05:19:14.883264] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.883277] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:08.539 [2024-07-13 05:19:14.883295] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.539 [2024-07-13 05:19:14.883352] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:08.539 [2024-07-13 05:19:14.883372] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:08.539 [2024-07-13 05:19:14.883384] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:08.539 [2024-07-13 05:19:14.883396] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:08.539 [2024-07-13 05:19:14.883407] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:08.539 [2024-07-13 05:19:14.883616] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.539 [2024-07-13 05:19:14.883654] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.539 [2024-07-13 05:19:14.883666] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.883677] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:08.539 [2024-07-13 05:19:14.883692] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:30:08.539 [2024-07-13 05:19:14.883721] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:30:08.539 [2024-07-13 05:19:14.883760] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.883778] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:08.539 [2024-07-13 05:19:14.883796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.539 [2024-07-13 05:19:14.883826] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:08.539 [2024-07-13 05:19:14.884092] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:08.539 [2024-07-13 05:19:14.884117] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:08.539 [2024-07-13 05:19:14.884137] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.884159] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:08.539 [2024-07-13 05:19:14.884189] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:08.539 [2024-07-13 05:19:14.884203] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.884222] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.884240] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.884261] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.539 [2024-07-13 05:19:14.884278] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.539 [2024-07-13 05:19:14.884289] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.884300] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:08.539 [2024-07-13 05:19:14.884338] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:30:08.539 [2024-07-13 05:19:14.884418] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.884436] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:08.539 [2024-07-13 05:19:14.884468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.539 [2024-07-13 05:19:14.884503] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.884519] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.884531] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=5 on tqpair(0x615000015700) 00:30:08.539 [2024-07-13 05:19:14.884547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.539 [2024-07-13 05:19:14.884578] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:08.539 [2024-07-13 05:19:14.884611] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:08.539 [2024-07-13 05:19:14.885024] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:08.539 [2024-07-13 05:19:14.885049] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:08.539 [2024-07-13 05:19:14.885061] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.885077] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=1024, cccid=4 00:30:08.539 [2024-07-13 05:19:14.885091] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=1024 00:30:08.539 [2024-07-13 05:19:14.885106] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.885123] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.885136] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.885154] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.539 [2024-07-13 05:19:14.885186] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.539 [2024-07-13 05:19:14.885196] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.885207] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:08.539 [2024-07-13 05:19:14.926060] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.539 [2024-07-13 05:19:14.926091] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.539 [2024-07-13 05:19:14.926107] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.926119] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:08.539 [2024-07-13 05:19:14.926156] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.926173] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:08.539 [2024-07-13 05:19:14.926197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.539 [2024-07-13 05:19:14.926251] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:08.539 [2024-07-13 05:19:14.926445] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:08.539 [2024-07-13 05:19:14.926476] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:08.539 [2024-07-13 05:19:14.926499] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:08.539 [2024-07-13 05:19:14.926515] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=3072, cccid=4 00:30:08.539 [2024-07-13 05:19:14.926528] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=3072
00:30:08.539 [2024-07-13 05:19:14.926539] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:08.539 [2024-07-13 05:19:14.926581] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:30:08.539 [2024-07-13 05:19:14.926604] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:30:08.539 [2024-07-13 05:19:14.926632] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:08.539 [2024-07-13 05:19:14.926656] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:08.539 [2024-07-13 05:19:14.926668] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:08.539 [2024-07-13 05:19:14.926679] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700
00:30:08.539 [2024-07-13 05:19:14.926707] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:08.539 [2024-07-13 05:19:14.926723] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700)
00:30:08.539 [2024-07-13 05:19:14.926754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:08.539 [2024-07-13 05:19:14.926827] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:30:08.539 [2024-07-13 05:19:14.930905] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:30:08.539 [2024-07-13 05:19:14.930929] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:30:08.539 [2024-07-13 05:19:14.930941] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:30:08.539 [2024-07-13 05:19:14.930951] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8, cccid=4
00:30:08.539 [2024-07-13 05:19:14.930963] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=8
00:30:08.539 [2024-07-13 05:19:14.930973] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:08.539 [2024-07-13 05:19:14.930989] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:30:08.539 [2024-07-13 05:19:14.931001] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:30:08.539 [2024-07-13 05:19:14.970917] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:08.539 [2024-07-13 05:19:14.970947] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:08.539 [2024-07-13 05:19:14.970960] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:08.539 [2024-07-13 05:19:14.970987] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 =====================================================
00:30:08.540 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:30:08.540 =====================================================
00:30:08.540 Controller Capabilities/Features
00:30:08.540 ================================
00:30:08.540 Vendor ID: 0000
00:30:08.540 Subsystem Vendor ID: 0000
00:30:08.540 Serial Number: ....................
00:30:08.540 Model Number: ........................................
00:30:08.540 Firmware Version: 24.09
00:30:08.540 Recommended Arb Burst: 0
00:30:08.540 IEEE OUI Identifier: 00 00 00
00:30:08.540 Multi-path I/O
00:30:08.540 May have multiple subsystem ports: No
00:30:08.540 May have multiple controllers: No
00:30:08.540 Associated with SR-IOV VF: No
00:30:08.540 Max Data Transfer Size: 131072
00:30:08.540 Max Number of Namespaces: 0
00:30:08.540 Max Number of I/O Queues: 1024
00:30:08.540 NVMe Specification Version (VS): 1.3
00:30:08.540 NVMe Specification Version (Identify): 1.3
00:30:08.540 Maximum Queue Entries: 128
00:30:08.540 Contiguous Queues Required: Yes
00:30:08.540 Arbitration Mechanisms Supported
00:30:08.540 Weighted Round Robin: Not Supported
00:30:08.540 Vendor Specific: Not Supported
00:30:08.540 Reset Timeout: 15000 ms
00:30:08.540 Doorbell Stride: 4 bytes
00:30:08.540 NVM Subsystem Reset: Not Supported
00:30:08.540 Command Sets Supported
00:30:08.540 NVM Command Set: Supported
00:30:08.540 Boot Partition: Not Supported
00:30:08.540 Memory Page Size Minimum: 4096 bytes
00:30:08.540 Memory Page Size Maximum: 4096 bytes
00:30:08.540 Persistent Memory Region: Not Supported
00:30:08.540 Optional Asynchronous Events Supported
00:30:08.540 Namespace Attribute Notices: Not Supported
00:30:08.540 Firmware Activation Notices: Not Supported
00:30:08.540 ANA Change Notices: Not Supported
00:30:08.540 PLE Aggregate Log Change Notices: Not Supported
00:30:08.540 LBA Status Info Alert Notices: Not Supported
00:30:08.540 EGE Aggregate Log Change Notices: Not Supported
00:30:08.540 Normal NVM Subsystem Shutdown event: Not Supported
00:30:08.540 Zone Descriptor Change Notices: Not Supported
00:30:08.540 Discovery Log Change Notices: Supported
00:30:08.540 Controller Attributes
00:30:08.540 128-bit Host Identifier: Not Supported
00:30:08.540 Non-Operational Permissive Mode: Not Supported
00:30:08.540 NVM Sets: Not Supported
00:30:08.540 Read Recovery Levels: Not Supported
00:30:08.540 Endurance Groups: Not Supported
00:30:08.540 Predictable Latency Mode: Not Supported
00:30:08.540 Traffic Based Keep ALive: Not Supported
00:30:08.540 Namespace Granularity: Not Supported
00:30:08.540 SQ Associations: Not Supported
00:30:08.540 UUID List: Not Supported
00:30:08.540 Multi-Domain Subsystem: Not Supported
00:30:08.540 Fixed Capacity Management: Not Supported
00:30:08.540 Variable Capacity Management: Not Supported
00:30:08.540 Delete Endurance Group: Not Supported
00:30:08.540 Delete NVM Set: Not Supported
00:30:08.540 Extended LBA Formats Supported: Not Supported
00:30:08.540 Flexible Data Placement Supported: Not Supported
00:30:08.540
00:30:08.540 Controller Memory Buffer Support
00:30:08.540 ================================
00:30:08.540 Supported: No
00:30:08.540
00:30:08.540 Persistent Memory Region Support
00:30:08.540 ================================
00:30:08.540 Supported: No
00:30:08.540
00:30:08.540 Admin Command Set Attributes
00:30:08.540 ============================
00:30:08.540 Security Send/Receive: Not Supported
00:30:08.540 Format NVM: Not Supported
00:30:08.540 Firmware Activate/Download: Not Supported
00:30:08.540 Namespace Management: Not Supported
00:30:08.540 Device Self-Test: Not Supported
00:30:08.540 Directives: Not Supported
00:30:08.540 NVMe-MI: Not Supported
00:30:08.540 Virtualization Management: Not Supported
00:30:08.540 Doorbell Buffer Config: Not Supported
00:30:08.540 Get LBA Status Capability: Not Supported
00:30:08.540 Command & Feature Lockdown Capability: Not Supported
00:30:08.540 Abort Command Limit: 1
00:30:08.540 Async Event Request Limit: 4
00:30:08.540 Number of Firmware Slots: N/A
00:30:08.540 Firmware Slot 1 Read-Only: N/A
00:30:08.540 Firmware Activation Without Reset: N/A
00:30:08.540 Multiple Update Detection Support: N/A
00:30:08.540 Firmware Update Granularity: No Information Provided
00:30:08.540 Per-Namespace SMART Log: No
00:30:08.540 Asymmetric Namespace Access Log Page: Not Supported
00:30:08.540 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:30:08.540 Command Effects Log Page: Not Supported
00:30:08.540 Get Log Page Extended Data: Supported
00:30:08.540 Telemetry Log Pages: Not Supported
00:30:08.540 Persistent Event Log Pages: Not Supported
00:30:08.540 Supported Log Pages Log Page: May Support
00:30:08.540 Commands Supported & Effects Log Page: Not Supported
00:30:08.540 Feature Identifiers & Effects Log Page:May Support
00:30:08.540 NVMe-MI Commands & Effects Log Page: May Support
00:30:08.540 Data Area 4 for Telemetry Log: Not Supported
00:30:08.540 Error Log Page Entries Supported: 128
00:30:08.540 Keep Alive: Not Supported
00:30:08.540
00:30:08.540 NVM Command Set Attributes
00:30:08.540 ==========================
00:30:08.540 Submission Queue Entry Size
00:30:08.540 Max: 1
00:30:08.540 Min: 1
00:30:08.540 Completion Queue Entry Size
00:30:08.540 Max: 1
00:30:08.540 Min: 1
00:30:08.540 Number of Namespaces: 0
00:30:08.540 Compare Command: Not Supported
00:30:08.540 Write Uncorrectable Command: Not Supported
00:30:08.540 Dataset Management Command: Not Supported
00:30:08.540 Write Zeroes Command: Not Supported
00:30:08.540 Set Features Save Field: Not Supported
00:30:08.540 Reservations: Not Supported
00:30:08.540 Timestamp: Not Supported
00:30:08.540 Copy: Not Supported
00:30:08.540 Volatile Write Cache: Not Present
00:30:08.540 Atomic Write Unit (Normal): 1
00:30:08.540 Atomic Write Unit (PFail): 1
00:30:08.540 Atomic Compare & Write Unit: 1
00:30:08.540 Fused Compare & Write: Supported
00:30:08.540 Scatter-Gather List
00:30:08.540 SGL Command Set: Supported
00:30:08.540 SGL Keyed: Supported
00:30:08.540 SGL Bit Bucket Descriptor: Not Supported
00:30:08.540 SGL Metadata Pointer: Not Supported
00:30:08.540 Oversized SGL: Not Supported
00:30:08.540 SGL Metadata Address: Not Supported
00:30:08.540 SGL Offset: Supported
00:30:08.540 Transport SGL Data Block: Not Supported
00:30:08.540 Replay Protected Memory Block: Not Supported
00:30:08.540
00:30:08.540 Firmware Slot Information
00:30:08.540 =========================
00:30:08.540 Active slot: 0
00:30:08.540
00:30:08.540
00:30:08.540 Error Log
00:30:08.540 =========
00:30:08.540
00:30:08.540 Active Namespaces
00:30:08.540 =================
00:30:08.540 Discovery Log Page
00:30:08.540 ==================
00:30:08.540 Generation Counter: 2
00:30:08.540 Number of Records: 2
00:30:08.540 Record Format: 0
00:30:08.540
00:30:08.540 Discovery Log Entry 0
00:30:08.540 ----------------------
00:30:08.540 Transport Type: 3 (TCP)
00:30:08.540 Address Family: 1 (IPv4)
00:30:08.540 Subsystem Type: 3 (Current Discovery Subsystem)
00:30:08.540 Entry Flags:
00:30:08.540 Duplicate Returned Information: 1
00:30:08.540 Explicit Persistent Connection Support for Discovery: 1
00:30:08.540 Transport Requirements:
00:30:08.540 Secure Channel: Not Required
00:30:08.540 Port ID: 0 (0x0000)
00:30:08.540 Controller ID: 65535 (0xffff)
00:30:08.540 Admin Max SQ Size: 128
00:30:08.540 Transport Service Identifier: 4420
00:30:08.540 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:30:08.540 Transport Address: 10.0.0.2
00:30:08.540 Discovery Log Entry 1
00:30:08.540 ----------------------
00:30:08.540 Transport Type: 3 (TCP)
00:30:08.540 Address Family: 1 (IPv4)
00:30:08.540 Subsystem Type: 2 (NVM Subsystem)
00:30:08.540 Entry Flags:
00:30:08.540 Duplicate Returned Information: 0
00:30:08.540 Explicit Persistent Connection Support for Discovery: 0
00:30:08.540 Transport Requirements:
00:30:08.540 Secure Channel: Not Required
00:30:08.540 Port ID: 0 (0x0000)
00:30:08.540 Controller ID: 65535 (0xffff)
00:30:08.540 Admin Max SQ Size: 128
00:30:08.540 Transport Service Identifier: 4420
00:30:08.540 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:30:08.540 Transport Address: 10.0.0.2 [2024-07-13 05:19:14.971175] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD [2024-07-13 05:19:14.971209] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 [2024-07-13 05:19:14.971235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-07-13 05:19:14.971250] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 [2024-07-13 05:19:14.971280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-07-13 05:19:14.971293] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 [2024-07-13 05:19:14.971306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-07-13 05:19:14.971318] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 [2024-07-13 05:19:14.971346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-07-13 05:19:14.971372] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter [2024-07-13 05:19:14.971386] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter [2024-07-13 05:19:14.971397] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) [2024-07-13 05:19:14.971420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [2024-07-13 05:19:14.971457] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:30:08.540 [2024-07-13 05:19:14.971651] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:08.541 [2024-07-13 05:19:14.971676] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:08.541 [2024-07-13 05:19:14.971689] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:08.541 [2024-07-13 05:19:14.971701] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700
00:30:08.541 [2024-07-13 05:19:14.971722] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:08.541 [2024-07-13 05:19:14.971736] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:08.541 [2024-07-13 05:19:14.971747] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd
cid=3 on tqpair(0x615000015700) 00:30:08.541 [2024-07-13 05:19:14.971781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.541 [2024-07-13 05:19:14.971847] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:08.541 [2024-07-13 05:19:14.972096] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.541 [2024-07-13 05:19:14.972122] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.541 [2024-07-13 05:19:14.972135] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.541 [2024-07-13 05:19:14.972146] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:08.541 [2024-07-13 05:19:14.972160] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:30:08.541 [2024-07-13 05:19:14.972174] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:30:08.541 [2024-07-13 05:19:14.972226] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.541 [2024-07-13 05:19:14.972242] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.541 [2024-07-13 05:19:14.972253] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:08.541 [2024-07-13 05:19:14.972271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.541 [2024-07-13 05:19:14.972301] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:08.541 [2024-07-13 05:19:14.972498] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.541 [2024-07-13 05:19:14.972521] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.541 [2024-07-13 05:19:14.972533] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.541 [2024-07-13 05:19:14.972543] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:08.541 [2024-07-13 05:19:14.972573] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.541 [2024-07-13 05:19:14.972590] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.541 [2024-07-13 05:19:14.972600] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:08.541 [2024-07-13 05:19:14.972634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.541 [2024-07-13 05:19:14.972664] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:08.541 [2024-07-13 05:19:14.972862] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.541 [2024-07-13 05:19:14.972897] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.541 [2024-07-13 05:19:14.972910] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.541 [2024-07-13 05:19:14.972921] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:08.541 [2024-07-13 05:19:14.972949] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.541 [2024-07-13 05:19:14.972966] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.541 [2024-07-13 05:19:14.972977] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:08.541 [2024-07-13 05:19:14.972999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.541 [2024-07-13 05:19:14.973031] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:08.541 [2024-07-13 05:19:14.973231] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.541 [2024-07-13 05:19:14.973254] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.541 [2024-07-13 05:19:14.973269] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.541 [2024-07-13 05:19:14.973282] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:08.541 [2024-07-13 05:19:14.973309] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.541 [2024-07-13 05:19:14.973327] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.541 [2024-07-13 05:19:14.973354] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:08.541 [2024-07-13 05:19:14.973371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.541 [2024-07-13 05:19:14.973415] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:08.541 [2024-07-13 05:19:14.973630] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.541 [2024-07-13 05:19:14.973655] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.541 [2024-07-13 05:19:14.973667] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.541 [2024-07-13 05:19:14.973678] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:08.541 [2024-07-13 05:19:14.973707] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.541 [2024-07-13 05:19:14.973724] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.541 [2024-07-13 05:19:14.973734] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:08.541 [2024-07-13 05:19:14.973752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.541 [2024-07-13 05:19:14.973797] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:08.541 [2024-07-13 05:19:14.974008] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.541 [2024-07-13 05:19:14.974042] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.541 [2024-07-13 05:19:14.974056] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.541 [2024-07-13 05:19:14.974067] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:08.541 [2024-07-13 05:19:14.974096] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.541 [2024-07-13 05:19:14.974113] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.541 [2024-07-13 05:19:14.974123] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:08.541 [2024-07-13 05:19:14.974156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.541 [2024-07-13 05:19:14.974186] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:08.541 [2024-07-13 05:19:14.974404] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.541 [2024-07-13 05:19:14.974427] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.541 [2024-07-13 05:19:14.974438] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.541 [2024-07-13 05:19:14.974449] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:08.541 [2024-07-13 05:19:14.974478] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.541 [2024-07-13 05:19:14.974494] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.541 [2024-07-13 05:19:14.974520] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:08.541 [2024-07-13 05:19:14.974547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.541 [2024-07-13 05:19:14.974593] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:08.541 [2024-07-13 05:19:14.974816] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.541 [2024-07-13 05:19:14.974839] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.541 [2024-07-13 05:19:14.974852] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.541 [2024-07-13 05:19:14.978880] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:08.541 [2024-07-13 05:19:14.978920] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.541 [2024-07-13 05:19:14.978937] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.541 [2024-07-13 05:19:14.978948] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:08.541 [2024-07-13 05:19:14.978966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.541 [2024-07-13 05:19:14.978996] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:08.541 [2024-07-13 05:19:14.979171] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.541 [2024-07-13 05:19:14.979197] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.541 [2024-07-13 05:19:14.979209] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.541 [2024-07-13 05:19:14.979220] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:08.541 [2024-07-13 05:19:14.979244] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:30:08.541 00:30:08.803 05:19:15 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:08.803 [2024-07-13 05:19:15.079782] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:08.803 [2024-07-13 05:19:15.079892] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid800826 ] 00:30:08.803 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.803 [2024-07-13 05:19:15.136064] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:30:08.803 [2024-07-13 05:19:15.136210] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:08.803 [2024-07-13 05:19:15.136232] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:08.803 [2024-07-13 05:19:15.136265] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:08.803 [2024-07-13 05:19:15.136289] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:08.803 [2024-07-13 05:19:15.139947] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:30:08.803 [2024-07-13 05:19:15.140035] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:30:08.803 [2024-07-13 05:19:15.140224] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:08.803 [2024-07-13 05:19:15.140255] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:08.803 [2024-07-13 05:19:15.140272] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:08.803 [2024-07-13 05:19:15.140287] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:08.803 [2024-07-13 05:19:15.140363] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.803 [2024-07-13 05:19:15.140389] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.803 [2024-07-13 05:19:15.140406] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:08.803 [2024-07-13 05:19:15.140445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:08.803 [2024-07-13 05:19:15.140506] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:08.803 [2024-07-13 05:19:15.147907] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.803 [2024-07-13 05:19:15.147936] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.803 [2024-07-13 05:19:15.147949] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.803 [2024-07-13 05:19:15.147963] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:08.803 [2024-07-13 05:19:15.147996] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:08.803 [2024-07-13 05:19:15.148020] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:30:08.803 [2024-07-13 05:19:15.148037] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:30:08.803 [2024-07-13 
05:19:15.148071] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.803 [2024-07-13 05:19:15.148086] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.803 [2024-07-13 05:19:15.148103] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:08.803 [2024-07-13 05:19:15.148124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.803 [2024-07-13 05:19:15.148161] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:08.803 [2024-07-13 05:19:15.148331] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.803 [2024-07-13 05:19:15.148354] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.803 [2024-07-13 05:19:15.148367] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.803 [2024-07-13 05:19:15.148379] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:08.803 [2024-07-13 05:19:15.148395] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:30:08.803 [2024-07-13 05:19:15.148418] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:30:08.803 [2024-07-13 05:19:15.148444] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.803 [2024-07-13 05:19:15.148472] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.803 [2024-07-13 05:19:15.148484] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:08.803 [2024-07-13 05:19:15.148507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.803 [2024-07-13 05:19:15.148540] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:08.803 [2024-07-13 05:19:15.148740] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.803 [2024-07-13 05:19:15.148761] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.803 [2024-07-13 05:19:15.148772] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.803 [2024-07-13 05:19:15.148783] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:08.803 [2024-07-13 05:19:15.148798] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:30:08.803 [2024-07-13 05:19:15.148821] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:30:08.803 [2024-07-13 05:19:15.148862] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.803 [2024-07-13 05:19:15.148887] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.803 [2024-07-13 05:19:15.148900] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:08.803 [2024-07-13 05:19:15.148941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.803 [2024-07-13 05:19:15.148978] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x62600001b100, cid 0, qid 0 00:30:08.803 [2024-07-13 05:19:15.149149] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.803 [2024-07-13 05:19:15.149170] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.803 [2024-07-13 05:19:15.149182] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.803 [2024-07-13 05:19:15.149193] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:08.803 [2024-07-13 05:19:15.149208] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:08.803 [2024-07-13 05:19:15.149235] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.803 [2024-07-13 05:19:15.149250] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.803 [2024-07-13 05:19:15.149262] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:08.803 [2024-07-13 05:19:15.149295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.803 [2024-07-13 05:19:15.149346] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:08.803 [2024-07-13 05:19:15.149520] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.803 [2024-07-13 05:19:15.149541] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.803 [2024-07-13 05:19:15.149552] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.803 [2024-07-13 05:19:15.149563] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:08.803 [2024-07-13 05:19:15.149578] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:30:08.803 [2024-07-13 05:19:15.149593] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:30:08.803 [2024-07-13 05:19:15.149620] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:08.803 [2024-07-13 05:19:15.149752] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:30:08.803 [2024-07-13 05:19:15.149765] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:08.803 [2024-07-13 05:19:15.149788] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.803 [2024-07-13 05:19:15.149801] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.803 [2024-07-13 05:19:15.149812] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:08.803 [2024-07-13 05:19:15.149851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.803 [2024-07-13 05:19:15.149902] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:08.803 [2024-07-13 05:19:15.150055] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.803 [2024-07-13 05:19:15.150077] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.803 [2024-07-13 05:19:15.150089] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.803 [2024-07-13 05:19:15.150106] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:08.803 [2024-07-13 05:19:15.150122] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:08.804 [2024-07-13 05:19:15.150153] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.150177] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.150189] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:08.804 [2024-07-13 05:19:15.150208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.804 [2024-07-13 05:19:15.150254] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:08.804 [2024-07-13 05:19:15.150429] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.804 [2024-07-13 05:19:15.150451] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.804 [2024-07-13 05:19:15.150463] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.150474] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:08.804 [2024-07-13 05:19:15.150488] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:08.804 [2024-07-13 05:19:15.150516] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:30:08.804 [2024-07-13 05:19:15.150540] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:30:08.804 [2024-07-13 05:19:15.150565] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:30:08.804 [2024-07-13 05:19:15.150609] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.150628] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:08.804 [2024-07-13 05:19:15.150650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.804 [2024-07-13 05:19:15.150686] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:08.804 [2024-07-13 05:19:15.150922] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:08.804 [2024-07-13 05:19:15.150946] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:08.804 [2024-07-13 05:19:15.150958] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.150970] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:30:08.804 [2024-07-13 05:19:15.150984] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:08.804 [2024-07-13 05:19:15.150996] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.151017] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.151031] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.151098] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.804 [2024-07-13 05:19:15.151119] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.804 [2024-07-13 05:19:15.151130] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.151141] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:08.804 [2024-07-13 05:19:15.151172] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:30:08.804 [2024-07-13 05:19:15.151205] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:30:08.804 [2024-07-13 05:19:15.151219] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:30:08.804 [2024-07-13 05:19:15.151233] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:30:08.804 [2024-07-13 05:19:15.151259] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:30:08.804 [2024-07-13 05:19:15.151275] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:30:08.804 [2024-07-13 05:19:15.151303] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:30:08.804 [2024-07-13 05:19:15.151329] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.151344] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.151356] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:08.804 [2024-07-13 05:19:15.151376] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:08.804 [2024-07-13 05:19:15.151408] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:08.804 [2024-07-13 05:19:15.151593] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.804 [2024-07-13 05:19:15.151619] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.804 [2024-07-13 05:19:15.151630] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.151641] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:08.804 [2024-07-13 05:19:15.151666] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.151681] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.151697] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:08.804 [2024-07-13 05:19:15.151738] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.804 [2024-07-13 05:19:15.151765] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.151777] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.151787] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:30:08.804 [2024-07-13 05:19:15.151803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.804 [2024-07-13 05:19:15.151819] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.151846] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.151857] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:30:08.804 [2024-07-13 05:19:15.155897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.804 [2024-07-13 05:19:15.155937] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.155961] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.155973] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:08.804 [2024-07-13 05:19:15.155989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.804 [2024-07-13 05:19:15.156004] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:08.804 [2024-07-13 05:19:15.156048] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:08.804 [2024-07-13 05:19:15.156077] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.156090] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:08.804 [2024-07-13 05:19:15.156110] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.804 [2024-07-13 05:19:15.156149] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:08.804 [2024-07-13 05:19:15.156168] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:08.804 [2024-07-13 05:19:15.156181] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:08.804 [2024-07-13 05:19:15.156198] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:08.804 [2024-07-13 05:19:15.156211] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:08.804 [2024-07-13 05:19:15.156435] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.804 [2024-07-13 05:19:15.156472] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.804 [2024-07-13 05:19:15.156483] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.804 
[2024-07-13 05:19:15.156494] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:08.804 [2024-07-13 05:19:15.156511] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:30:08.804 [2024-07-13 05:19:15.156526] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:30:08.804 [2024-07-13 05:19:15.156547] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:30:08.804 [2024-07-13 05:19:15.156576] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:08.804 [2024-07-13 05:19:15.156598] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.156612] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.156623] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:08.804 [2024-07-13 05:19:15.156642] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:08.804 [2024-07-13 05:19:15.156674] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:08.804 [2024-07-13 05:19:15.156851] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.804 [2024-07-13 05:19:15.156882] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.804 [2024-07-13 05:19:15.156895] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.156905] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:08.804 [2024-07-13 05:19:15.157008] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:30:08.804 [2024-07-13 05:19:15.157045] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:08.804 [2024-07-13 05:19:15.157072] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.157086] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:08.804 [2024-07-13 05:19:15.157106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.804 [2024-07-13 05:19:15.157145] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:08.804 [2024-07-13 05:19:15.157375] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:08.804 [2024-07-13 05:19:15.157397] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:08.804 [2024-07-13 05:19:15.157408] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.157419] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:08.804 [2024-07-13 05:19:15.157435] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:08.804 [2024-07-13 05:19:15.157447] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.157465] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.157482] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:08.804 [2024-07-13 05:19:15.157501] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.804 [2024-07-13 05:19:15.157518] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.804 [2024-07-13 05:19:15.157529] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.805 [2024-07-13 05:19:15.157539] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:08.805 [2024-07-13 05:19:15.157585] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:30:08.805 [2024-07-13 05:19:15.157619] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:30:08.805 [2024-07-13 05:19:15.157658] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:30:08.805 [2024-07-13 05:19:15.157699] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.805 [2024-07-13 05:19:15.157714] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:08.805 [2024-07-13 05:19:15.157740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.805 [2024-07-13 05:19:15.157773] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:08.805 [2024-07-13 05:19:15.158020] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:08.805 [2024-07-13 05:19:15.158043] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:08.805 [2024-07-13 05:19:15.158054] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:08.805 [2024-07-13 05:19:15.158065] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:08.805 [2024-07-13 05:19:15.158077] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:08.805 [2024-07-13 05:19:15.158088] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.805 [2024-07-13 05:19:15.158105] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:08.805 [2024-07-13 05:19:15.158122] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:08.805 [2024-07-13 05:19:15.158141] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.805 [2024-07-13 05:19:15.158164] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.805 [2024-07-13 05:19:15.158176] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.805 [2024-07-13 05:19:15.158187] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:08.805 [2024-07-13 05:19:15.158228] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify 
namespace id descriptors (timeout 30000 ms) 00:30:08.805 [2024-07-13 05:19:15.158259] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:08.805 [2024-07-13 05:19:15.158286] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.805 [2024-07-13 05:19:15.158322] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:08.805 [2024-07-13 05:19:15.158343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.805 [2024-07-13 05:19:15.158375] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:08.805 [2024-07-13 05:19:15.158574] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:08.805 [2024-07-13 05:19:15.158596] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:08.805 [2024-07-13 05:19:15.158607] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:08.805 [2024-07-13 05:19:15.158617] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:08.805 [2024-07-13 05:19:15.158629] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:08.805 [2024-07-13 05:19:15.158640] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.805 [2024-07-13 05:19:15.158657] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:08.805 [2024-07-13 05:19:15.158682] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:08.805 [2024-07-13 05:19:15.158701] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.805 [2024-07-13 05:19:15.158722] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.805 [2024-07-13 05:19:15.158735] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.805 [2024-07-13 05:19:15.158746] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:08.805 [2024-07-13 05:19:15.158774] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:08.805 [2024-07-13 05:19:15.158804] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:30:08.805 [2024-07-13 05:19:15.158832] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:30:08.805 [2024-07-13 05:19:15.158851] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:08.805 [2024-07-13 05:19:15.158874] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:08.805 [2024-07-13 05:19:15.158891] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:30:08.805 [2024-07-13 05:19:15.158913] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 
00:30:08.805 [2024-07-13 05:19:15.158925] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:30:08.805 [2024-07-13 05:19:15.158939] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:30:08.805 [2024-07-13 05:19:15.158993] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.805 [2024-07-13 05:19:15.159010] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:08.805 [2024-07-13 05:19:15.159034] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.805 [2024-07-13 05:19:15.159059] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.805 [2024-07-13 05:19:15.159072] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.805 [2024-07-13 05:19:15.159083] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:08.805 [2024-07-13 05:19:15.159101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.805 [2024-07-13 05:19:15.159134] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:08.805 [2024-07-13 05:19:15.159153] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:08.805 [2024-07-13 05:19:15.159394] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.805 [2024-07-13 05:19:15.159418] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.805 [2024-07-13 05:19:15.159449] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.805 [2024-07-13 05:19:15.159467] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:08.805 [2024-07-13 05:19:15.159489] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.805 [2024-07-13 05:19:15.159505] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.805 [2024-07-13 05:19:15.159516] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.805 [2024-07-13 05:19:15.159526] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:08.805 [2024-07-13 05:19:15.159555] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.805 [2024-07-13 05:19:15.159571] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:08.805 [2024-07-13 05:19:15.159588] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.805 [2024-07-13 05:19:15.159618] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:08.805 [2024-07-13 05:19:15.159809] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.805 [2024-07-13 05:19:15.159835] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.805 [2024-07-13 05:19:15.159847] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.805 [2024-07-13 05:19:15.159858] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 
00:30:08.805 [2024-07-13 05:19:15.163919] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.805 [2024-07-13 05:19:15.163938] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:08.805 [2024-07-13 05:19:15.163973] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.805 [2024-07-13 05:19:15.164005] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:08.805 [2024-07-13 05:19:15.164181] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.805 [2024-07-13 05:19:15.164203] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.805 [2024-07-13 05:19:15.164215] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.805 [2024-07-13 05:19:15.164226] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:08.805 [2024-07-13 05:19:15.164251] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.805 [2024-07-13 05:19:15.164266] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:08.805 [2024-07-13 05:19:15.164288] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.805 [2024-07-13 05:19:15.164335] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:08.805 [2024-07-13 05:19:15.164502] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.805 [2024-07-13 05:19:15.164524] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.805 [2024-07-13 05:19:15.164540] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.805 [2024-07-13 05:19:15.164552] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:08.805 [2024-07-13 05:19:15.164594] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.805 [2024-07-13 05:19:15.164612] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:08.805 [2024-07-13 05:19:15.164632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.805 [2024-07-13 05:19:15.164668] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.805 [2024-07-13 05:19:15.164683] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:08.805 [2024-07-13 05:19:15.164708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.805 [2024-07-13 05:19:15.164730] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.805 [2024-07-13 05:19:15.164744] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x615000015700) 00:30:08.805 [2024-07-13 05:19:15.164766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.805 [2024-07-13 05:19:15.164791] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.805 [2024-07-13 05:19:15.164810] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:08.805 [2024-07-13 05:19:15.164828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.805 [2024-07-13 05:19:15.164881] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:08.805 [2024-07-13 05:19:15.164926] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:08.805 [2024-07-13 05:19:15.164939] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:30:08.805 [2024-07-13 05:19:15.164951] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:08.805 [2024-07-13 05:19:15.165239] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:08.806 [2024-07-13 05:19:15.165267] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:08.806 [2024-07-13 05:19:15.165296] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:08.806 [2024-07-13 05:19:15.165307] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8192, cccid=5 00:30:08.806 [2024-07-13 05:19:15.165320] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x615000015700): expected_datao=0, payload_size=8192 00:30:08.806 [2024-07-13 05:19:15.165332] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.806 [2024-07-13 05:19:15.165363] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:08.806 [2024-07-13 05:19:15.165379] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:08.806 [2024-07-13 05:19:15.165399] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:08.806 [2024-07-13 05:19:15.165415] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:08.806 [2024-07-13 05:19:15.165426] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:08.806 [2024-07-13 05:19:15.165443] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=4 00:30:08.806 [2024-07-13 05:19:15.165456] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:30:08.806 [2024-07-13 05:19:15.165468] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.806 [2024-07-13 05:19:15.165484] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:08.806 [2024-07-13 05:19:15.165496] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:08.806 [2024-07-13 05:19:15.165515] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:08.806 [2024-07-13 05:19:15.165531] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:08.806 [2024-07-13 05:19:15.165542] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:08.806 [2024-07-13 05:19:15.165552] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=6 00:30:08.806 [2024-07-13 05:19:15.165564] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on 
tqpair(0x615000015700): expected_datao=0, payload_size=512 00:30:08.806 [2024-07-13 05:19:15.165575] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.806 [2024-07-13 05:19:15.165602] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:08.806 [2024-07-13 05:19:15.165616] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:08.806 [2024-07-13 05:19:15.165629] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:08.806 [2024-07-13 05:19:15.165659] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:08.806 [2024-07-13 05:19:15.165670] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:08.806 [2024-07-13 05:19:15.165679] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=7 00:30:08.806 [2024-07-13 05:19:15.165691] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:08.806 [2024-07-13 05:19:15.165701] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.806 [2024-07-13 05:19:15.165731] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:08.806 [2024-07-13 05:19:15.165743] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:08.806 [2024-07-13 05:19:15.165766] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.806 [2024-07-13 05:19:15.165781] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.806 [2024-07-13 05:19:15.165791] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.806 [2024-07-13 05:19:15.165806] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:08.806 [2024-07-13 05:19:15.165839] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.806 [2024-07-13 05:19:15.165856] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.806 [2024-07-13 05:19:15.165890] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.806 [2024-07-13 05:19:15.165920] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:08.806 [2024-07-13 05:19:15.165948] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.806 [2024-07-13 05:19:15.165965] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.806 [2024-07-13 05:19:15.165979] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.806 [2024-07-13 05:19:15.165990] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x615000015700 00:30:08.806 [2024-07-13 05:19:15.166012] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.806 [2024-07-13 05:19:15.166029] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.806 [2024-07-13 05:19:15.166039] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.806 [2024-07-13 05:19:15.166049] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:30:08.806 ===================================================== 00:30:08.806 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:08.806 ===================================================== 00:30:08.806 Controller Capabilities/Features 00:30:08.806 ================================ 
00:30:08.806 Vendor ID: 8086 00:30:08.806 Subsystem Vendor ID: 8086 00:30:08.806 Serial Number: SPDK00000000000001 00:30:08.806 Model Number: SPDK bdev Controller 00:30:08.806 Firmware Version: 24.09 00:30:08.806 Recommended Arb Burst: 6 00:30:08.806 IEEE OUI Identifier: e4 d2 5c 00:30:08.806 Multi-path I/O 00:30:08.806 May have multiple subsystem ports: Yes 00:30:08.806 May have multiple controllers: Yes 00:30:08.806 Associated with SR-IOV VF: No 00:30:08.806 Max Data Transfer Size: 131072 00:30:08.806 Max Number of Namespaces: 32 00:30:08.806 Max Number of I/O Queues: 127 00:30:08.806 NVMe Specification Version (VS): 1.3 00:30:08.806 NVMe Specification Version (Identify): 1.3 00:30:08.806 Maximum Queue Entries: 128 00:30:08.806 Contiguous Queues Required: Yes 00:30:08.806 Arbitration Mechanisms Supported 00:30:08.806 Weighted Round Robin: Not Supported 00:30:08.806 Vendor Specific: Not Supported 00:30:08.806 Reset Timeout: 15000 ms 00:30:08.806 Doorbell Stride: 4 bytes 00:30:08.806 NVM Subsystem Reset: Not Supported 00:30:08.806 Command Sets Supported 00:30:08.806 NVM Command Set: Supported 00:30:08.806 Boot Partition: Not Supported 00:30:08.806 Memory Page Size Minimum: 4096 bytes 00:30:08.806 Memory Page Size Maximum: 4096 bytes 00:30:08.806 Persistent Memory Region: Not Supported 00:30:08.806 Optional Asynchronous Events Supported 00:30:08.806 Namespace Attribute Notices: Supported 00:30:08.806 Firmware Activation Notices: Not Supported 00:30:08.806 ANA Change Notices: Not Supported 00:30:08.806 PLE Aggregate Log Change Notices: Not Supported 00:30:08.806 LBA Status Info Alert Notices: Not Supported 00:30:08.806 EGE Aggregate Log Change Notices: Not Supported 00:30:08.806 Normal NVM Subsystem Shutdown event: Not Supported 00:30:08.806 Zone Descriptor Change Notices: Not Supported 00:30:08.806 Discovery Log Change Notices: Not Supported 00:30:08.806 Controller Attributes 00:30:08.806 128-bit Host Identifier: Supported 00:30:08.806 Non-Operational Permissive Mode: Not Supported 00:30:08.806 NVM Sets: Not Supported 00:30:08.806 Read Recovery Levels: Not Supported 00:30:08.806 Endurance Groups: Not Supported 00:30:08.806 Predictable Latency Mode: Not Supported 00:30:08.806 Traffic Based Keep ALive: Not Supported 00:30:08.806 Namespace Granularity: Not Supported 00:30:08.806 SQ Associations: Not Supported 00:30:08.806 UUID List: Not Supported 00:30:08.806 Multi-Domain Subsystem: Not Supported 00:30:08.806 Fixed Capacity Management: Not Supported 00:30:08.806 Variable Capacity Management: Not Supported 00:30:08.806 Delete Endurance Group: Not Supported 00:30:08.806 Delete NVM Set: Not Supported 00:30:08.806 Extended LBA Formats Supported: Not Supported 00:30:08.806 Flexible Data Placement Supported: Not Supported 00:30:08.806 00:30:08.806 Controller Memory Buffer Support 00:30:08.806 ================================ 00:30:08.806 Supported: No 00:30:08.806 00:30:08.806 Persistent Memory Region Support 00:30:08.806 ================================ 00:30:08.806 Supported: No 00:30:08.806 00:30:08.806 Admin Command Set Attributes 00:30:08.806 ============================ 00:30:08.806 Security Send/Receive: Not Supported 00:30:08.806 Format NVM: Not Supported 00:30:08.806 Firmware Activate/Download: Not Supported 00:30:08.806 Namespace Management: Not Supported 00:30:08.806 Device Self-Test: Not Supported 00:30:08.806 Directives: Not Supported 00:30:08.806 NVMe-MI: Not Supported 00:30:08.806 Virtualization Management: Not Supported 00:30:08.806 Doorbell Buffer Config: Not Supported 00:30:08.806 
Get LBA Status Capability: Not Supported 00:30:08.806 Command & Feature Lockdown Capability: Not Supported 00:30:08.806 Abort Command Limit: 4 00:30:08.806 Async Event Request Limit: 4 00:30:08.806 Number of Firmware Slots: N/A 00:30:08.806 Firmware Slot 1 Read-Only: N/A 00:30:08.806 Firmware Activation Without Reset: N/A 00:30:08.806 Multiple Update Detection Support: N/A 00:30:08.806 Firmware Update Granularity: No Information Provided 00:30:08.806 Per-Namespace SMART Log: No 00:30:08.806 Asymmetric Namespace Access Log Page: Not Supported 00:30:08.806 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:08.806 Command Effects Log Page: Supported 00:30:08.806 Get Log Page Extended Data: Supported 00:30:08.806 Telemetry Log Pages: Not Supported 00:30:08.806 Persistent Event Log Pages: Not Supported 00:30:08.806 Supported Log Pages Log Page: May Support 00:30:08.806 Commands Supported & Effects Log Page: Not Supported 00:30:08.806 Feature Identifiers & Effects Log Page:May Support 00:30:08.806 NVMe-MI Commands & Effects Log Page: May Support 00:30:08.806 Data Area 4 for Telemetry Log: Not Supported 00:30:08.806 Error Log Page Entries Supported: 128 00:30:08.806 Keep Alive: Supported 00:30:08.806 Keep Alive Granularity: 10000 ms 00:30:08.806 00:30:08.806 NVM Command Set Attributes 00:30:08.806 ========================== 00:30:08.806 Submission Queue Entry Size 00:30:08.806 Max: 64 00:30:08.806 Min: 64 00:30:08.806 Completion Queue Entry Size 00:30:08.806 Max: 16 00:30:08.806 Min: 16 00:30:08.806 Number of Namespaces: 32 00:30:08.807 Compare Command: Supported 00:30:08.807 Write Uncorrectable Command: Not Supported 00:30:08.807 Dataset Management Command: Supported 00:30:08.807 Write Zeroes Command: Supported 00:30:08.807 Set Features Save Field: Not Supported 00:30:08.807 Reservations: Supported 00:30:08.807 Timestamp: Not Supported 00:30:08.807 Copy: Supported 00:30:08.807 Volatile Write Cache: Present 00:30:08.807 Atomic Write Unit (Normal): 1 00:30:08.807 Atomic Write Unit (PFail): 1 00:30:08.807 Atomic Compare & Write Unit: 1 00:30:08.807 Fused Compare & Write: Supported 00:30:08.807 Scatter-Gather List 00:30:08.807 SGL Command Set: Supported 00:30:08.807 SGL Keyed: Supported 00:30:08.807 SGL Bit Bucket Descriptor: Not Supported 00:30:08.807 SGL Metadata Pointer: Not Supported 00:30:08.807 Oversized SGL: Not Supported 00:30:08.807 SGL Metadata Address: Not Supported 00:30:08.807 SGL Offset: Supported 00:30:08.807 Transport SGL Data Block: Not Supported 00:30:08.807 Replay Protected Memory Block: Not Supported 00:30:08.807 00:30:08.807 Firmware Slot Information 00:30:08.807 ========================= 00:30:08.807 Active slot: 1 00:30:08.807 Slot 1 Firmware Revision: 24.09 00:30:08.807 00:30:08.807 00:30:08.807 Commands Supported and Effects 00:30:08.807 ============================== 00:30:08.807 Admin Commands 00:30:08.807 -------------- 00:30:08.807 Get Log Page (02h): Supported 00:30:08.807 Identify (06h): Supported 00:30:08.807 Abort (08h): Supported 00:30:08.807 Set Features (09h): Supported 00:30:08.807 Get Features (0Ah): Supported 00:30:08.807 Asynchronous Event Request (0Ch): Supported 00:30:08.807 Keep Alive (18h): Supported 00:30:08.807 I/O Commands 00:30:08.807 ------------ 00:30:08.807 Flush (00h): Supported LBA-Change 00:30:08.807 Write (01h): Supported LBA-Change 00:30:08.807 Read (02h): Supported 00:30:08.807 Compare (05h): Supported 00:30:08.807 Write Zeroes (08h): Supported LBA-Change 00:30:08.807 Dataset Management (09h): Supported LBA-Change 00:30:08.807 Copy (19h): 
Supported LBA-Change 00:30:08.807 00:30:08.807 Error Log 00:30:08.807 ========= 00:30:08.807 00:30:08.807 Arbitration 00:30:08.807 =========== 00:30:08.807 Arbitration Burst: 1 00:30:08.807 00:30:08.807 Power Management 00:30:08.807 ================ 00:30:08.807 Number of Power States: 1 00:30:08.807 Current Power State: Power State #0 00:30:08.807 Power State #0: 00:30:08.807 Max Power: 0.00 W 00:30:08.807 Non-Operational State: Operational 00:30:08.807 Entry Latency: Not Reported 00:30:08.807 Exit Latency: Not Reported 00:30:08.807 Relative Read Throughput: 0 00:30:08.807 Relative Read Latency: 0 00:30:08.807 Relative Write Throughput: 0 00:30:08.807 Relative Write Latency: 0 00:30:08.807 Idle Power: Not Reported 00:30:08.807 Active Power: Not Reported 00:30:08.807 Non-Operational Permissive Mode: Not Supported 00:30:08.807 00:30:08.807 Health Information 00:30:08.807 ================== 00:30:08.807 Critical Warnings: 00:30:08.807 Available Spare Space: OK 00:30:08.807 Temperature: OK 00:30:08.807 Device Reliability: OK 00:30:08.807 Read Only: No 00:30:08.807 Volatile Memory Backup: OK 00:30:08.807 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:08.807 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:30:08.807 Available Spare: 0% 00:30:08.807 Available Spare Threshold: 0% 00:30:08.807 Life Percentage Used:[2024-07-13 05:19:15.166290] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.807 [2024-07-13 05:19:15.166310] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:08.807 [2024-07-13 05:19:15.166329] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.807 [2024-07-13 05:19:15.166362] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:08.807 [2024-07-13 05:19:15.166544] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.807 [2024-07-13 05:19:15.166572] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.807 [2024-07-13 05:19:15.166586] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.807 [2024-07-13 05:19:15.166597] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:30:08.807 [2024-07-13 05:19:15.166683] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:30:08.807 [2024-07-13 05:19:15.166731] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:08.807 [2024-07-13 05:19:15.166760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.807 [2024-07-13 05:19:15.166788] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:30:08.807 [2024-07-13 05:19:15.166805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.807 [2024-07-13 05:19:15.166818] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:30:08.807 [2024-07-13 05:19:15.166831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.807 [2024-07-13 05:19:15.166843] nvme_tcp.c:1069:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:08.807 [2024-07-13 05:19:15.166884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.807 [2024-07-13 05:19:15.166909] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.807 [2024-07-13 05:19:15.166935] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.807 [2024-07-13 05:19:15.166948] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:08.807 [2024-07-13 05:19:15.166968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.807 [2024-07-13 05:19:15.167020] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:08.807 [2024-07-13 05:19:15.167202] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.807 [2024-07-13 05:19:15.167226] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.807 [2024-07-13 05:19:15.167238] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.807 [2024-07-13 05:19:15.167254] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:08.807 [2024-07-13 05:19:15.167278] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.807 [2024-07-13 05:19:15.167293] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.807 [2024-07-13 05:19:15.167304] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:08.807 [2024-07-13 05:19:15.167323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.807 [2024-07-13 05:19:15.167376] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:08.807 [2024-07-13 05:19:15.167601] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.807 [2024-07-13 05:19:15.167624] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.807 [2024-07-13 05:19:15.167635] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.807 [2024-07-13 05:19:15.167651] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:08.807 [2024-07-13 05:19:15.167667] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:30:08.807 [2024-07-13 05:19:15.167682] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:30:08.807 [2024-07-13 05:19:15.167711] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.807 [2024-07-13 05:19:15.167747] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.807 [2024-07-13 05:19:15.167758] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:08.807 [2024-07-13 05:19:15.167777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.807 [2024-07-13 05:19:15.167808] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:08.807 [2024-07-13 05:19:15.171916] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.807 [2024-07-13 05:19:15.171945] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.808 [2024-07-13 05:19:15.171958] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.808 [2024-07-13 05:19:15.171972] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:08.808 [2024-07-13 05:19:15.172017] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:08.808 [2024-07-13 05:19:15.172034] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:08.808 [2024-07-13 05:19:15.172045] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:08.808 [2024-07-13 05:19:15.172063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.808 [2024-07-13 05:19:15.172096] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:08.808 [2024-07-13 05:19:15.172255] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:08.808 [2024-07-13 05:19:15.172276] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:08.808 [2024-07-13 05:19:15.172288] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:08.808 [2024-07-13 05:19:15.172298] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:08.808 [2024-07-13 05:19:15.172320] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:30:08.808 0% 00:30:08.808 Data Units Read: 0 00:30:08.808 Data Units Written: 0 00:30:08.808 Host Read Commands: 0 00:30:08.808 Host Write Commands: 0 00:30:08.808 Controller Busy Time: 0 minutes 00:30:08.808 Power Cycles: 0 00:30:08.808 Power On Hours: 0 hours 00:30:08.808 Unsafe Shutdowns: 0 00:30:08.808 Unrecoverable Media Errors: 0 00:30:08.808 Lifetime Error Log Entries: 0 00:30:08.808 Warning Temperature Time: 0 minutes 00:30:08.808 Critical Temperature Time: 0 minutes 00:30:08.808 00:30:08.808 Number of Queues 00:30:08.808 ================ 00:30:08.808 Number of I/O Submission Queues: 127 00:30:08.808 Number of I/O Completion Queues: 127 00:30:08.808 00:30:08.808 Active Namespaces 00:30:08.808 ================= 00:30:08.808 Namespace ID:1 00:30:08.808 Error Recovery Timeout: Unlimited 00:30:08.808 Command Set Identifier: NVM (00h) 00:30:08.808 Deallocate: Supported 00:30:08.808 Deallocated/Unwritten Error: Not Supported 00:30:08.808 Deallocated Read Value: Unknown 00:30:08.808 Deallocate in Write Zeroes: Not Supported 00:30:08.808 Deallocated Guard Field: 0xFFFF 00:30:08.808 Flush: Supported 00:30:08.808 Reservation: Supported 00:30:08.808 Namespace Sharing Capabilities: Multiple Controllers 00:30:08.808 Size (in LBAs): 131072 (0GiB) 00:30:08.808 Capacity (in LBAs): 131072 (0GiB) 00:30:08.808 Utilization (in LBAs): 131072 (0GiB) 00:30:08.808 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:08.808 EUI64: ABCDEF0123456789 00:30:08.808 UUID: 9ca9ea67-7703-4bdd-aa19-2338d0c5a764 00:30:08.808 Thin Provisioning: Not Supported 00:30:08.808 Per-NS Atomic Units: Yes 00:30:08.808 Atomic Boundary Size (Normal): 0 00:30:08.808 Atomic Boundary Size (PFail): 0 00:30:08.808 Atomic Boundary Offset: 0 00:30:08.808 Maximum Single Source Range Length: 65535 00:30:08.808 Maximum Copy Length: 65535 
00:30:08.808 Maximum Source Range Count: 1 00:30:08.808 NGUID/EUI64 Never Reused: No 00:30:08.808 Namespace Write Protected: No 00:30:08.808 Number of LBA Formats: 1 00:30:08.808 Current LBA Format: LBA Format #00 00:30:08.808 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:08.808 00:30:08.808 05:19:15 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:08.808 05:19:15 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:08.808 05:19:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.808 05:19:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:08.808 05:19:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.808 05:19:15 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:08.808 05:19:15 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:08.808 05:19:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:08.808 05:19:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:30:08.808 05:19:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:08.808 05:19:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:30:08.808 05:19:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:08.808 05:19:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:08.808 rmmod nvme_tcp 00:30:08.808 rmmod nvme_fabrics 00:30:08.808 rmmod nvme_keyring 00:30:08.808 05:19:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:08.808 05:19:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:30:08.808 05:19:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:30:08.808 05:19:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 800660 ']' 00:30:08.808 05:19:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 800660 00:30:08.808 05:19:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 800660 ']' 00:30:08.808 05:19:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 800660 00:30:08.808 05:19:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:30:09.066 05:19:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:09.066 05:19:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 800660 00:30:09.066 05:19:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:09.066 05:19:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:09.066 05:19:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 800660' 00:30:09.066 killing process with pid 800660 00:30:09.066 05:19:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 800660 00:30:09.066 05:19:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 800660 00:30:10.440 05:19:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:10.440 05:19:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:10.440 05:19:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:10.440 05:19:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:10.440 05:19:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # 
remove_spdk_ns 00:30:10.440 05:19:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.440 05:19:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:10.440 05:19:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.342 05:19:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:12.342 00:30:12.342 real 0m7.460s 00:30:12.342 user 0m10.088s 00:30:12.342 sys 0m2.144s 00:30:12.342 05:19:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:12.342 05:19:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:12.342 ************************************ 00:30:12.342 END TEST nvmf_identify 00:30:12.342 ************************************ 00:30:12.600 05:19:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:12.600 05:19:18 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:12.600 05:19:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:12.600 05:19:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:12.600 05:19:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:12.600 ************************************ 00:30:12.600 START TEST nvmf_perf 00:30:12.600 ************************************ 00:30:12.600 05:19:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:12.600 * Looking for test storage... 00:30:12.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:12.600 05:19:18 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:12.600 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:12.600 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:12.600 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:12.600 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:12.600 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:12.600 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:12.600 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:12.600 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:12.600 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:12.600 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:12.600 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:12.600 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:12.600 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:12.600 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:12.600 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:12.600 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:12.600 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:12.600 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:12.600 05:19:18 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:12.600 05:19:18 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:12.600 05:19:18 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:30:12.601 05:19:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:14.499 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:14.499 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:30:14.499 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:14.499 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:14.499 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:14.499 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:14.499 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:14.499 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:30:14.499 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:14.499 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:30:14.499 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:30:14.499 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:30:14.499 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:30:14.499 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:14.500 05:19:20 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:14.500 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:14.500 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.500 
05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:14.500 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:14.500 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:14.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:14.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:30:14.500 00:30:14.500 --- 10.0.0.2 ping statistics --- 00:30:14.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.500 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:14.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:14.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:30:14.500 00:30:14.500 --- 10.0.0.1 ping statistics --- 00:30:14.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.500 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:14.500 05:19:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:14.760 05:19:21 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:14.760 05:19:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:14.760 05:19:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:14.760 05:19:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:14.760 05:19:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=802887 00:30:14.760 05:19:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:14.760 05:19:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 802887 00:30:14.760 05:19:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 802887 ']' 00:30:14.760 05:19:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:14.760 05:19:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:14.760 05:19:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:14.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:14.760 05:19:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:14.760 05:19:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:14.760 [2024-07-13 05:19:21.091650] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:30:14.760 [2024-07-13 05:19:21.091803] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:14.760 EAL: No free 2048 kB hugepages reported on node 1 00:30:14.760 [2024-07-13 05:19:21.232816] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:15.018 [2024-07-13 05:19:21.472782] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:15.018 [2024-07-13 05:19:21.472878] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:15.018 [2024-07-13 05:19:21.472906] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:15.018 [2024-07-13 05:19:21.472935] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:15.018 [2024-07-13 05:19:21.472955] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:15.018 [2024-07-13 05:19:21.473063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:15.018 [2024-07-13 05:19:21.473131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:15.018 [2024-07-13 05:19:21.473182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.018 [2024-07-13 05:19:21.473192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:15.584 05:19:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:15.584 05:19:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:30:15.584 05:19:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:15.584 05:19:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:15.584 05:19:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:15.584 05:19:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:15.584 05:19:22 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:15.584 05:19:22 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:18.894 05:19:25 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:18.894 05:19:25 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:19.152 05:19:25 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:30:19.152 05:19:25 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:19.409 05:19:25 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:19.409 05:19:25 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:30:19.409 05:19:25 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:19.409 05:19:25 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:19.409 05:19:25 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:19.672 [2024-07-13 05:19:26.042672] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:30:19.672 05:19:26 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:19.932 05:19:26 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:19.932 05:19:26 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:20.189 05:19:26 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:20.189 05:19:26 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:20.447 05:19:26 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:20.704 [2024-07-13 05:19:27.044121] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:20.704 05:19:27 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:20.962 05:19:27 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:30:20.962 05:19:27 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:20.962 05:19:27 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:20.962 05:19:27 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:22.332 Initializing NVMe Controllers 00:30:22.332 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:30:22.332 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:30:22.332 Initialization complete. Launching workers. 00:30:22.332 ======================================================== 00:30:22.332 Latency(us) 00:30:22.332 Device Information : IOPS MiB/s Average min max 00:30:22.332 PCIE (0000:88:00.0) NSID 1 from core 0: 73023.35 285.25 437.30 43.05 7304.84 00:30:22.332 ======================================================== 00:30:22.332 Total : 73023.35 285.25 437.30 43.05 7304.84 00:30:22.332 00:30:22.332 05:19:28 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:22.591 EAL: No free 2048 kB hugepages reported on node 1 00:30:23.963 Initializing NVMe Controllers 00:30:23.963 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:23.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:23.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:23.963 Initialization complete. Launching workers. 
00:30:23.963 ======================================================== 00:30:23.963 Latency(us) 00:30:23.963 Device Information : IOPS MiB/s Average min max 00:30:23.963 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 72.00 0.28 13898.07 226.34 47884.36 00:30:23.963 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 64.00 0.25 16053.99 5980.87 47939.03 00:30:23.963 ======================================================== 00:30:23.963 Total : 136.00 0.53 14912.62 226.34 47939.03 00:30:23.963 00:30:23.963 05:19:30 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:23.963 EAL: No free 2048 kB hugepages reported on node 1 00:30:25.335 Initializing NVMe Controllers 00:30:25.335 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:25.335 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:25.335 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:25.335 Initialization complete. Launching workers. 00:30:25.335 ======================================================== 00:30:25.335 Latency(us) 00:30:25.335 Device Information : IOPS MiB/s Average min max 00:30:25.335 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5052.40 19.74 6350.83 1189.67 43881.92 00:30:25.335 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3856.10 15.06 8326.77 4120.71 15979.40 00:30:25.335 ======================================================== 00:30:25.335 Total : 8908.50 34.80 7206.13 1189.67 43881.92 00:30:25.335 00:30:25.593 05:19:31 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:25.593 05:19:31 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:25.593 05:19:31 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:25.593 EAL: No free 2048 kB hugepages reported on node 1 00:30:28.116 Initializing NVMe Controllers 00:30:28.116 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:28.116 Controller IO queue size 128, less than required. 00:30:28.116 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:28.116 Controller IO queue size 128, less than required. 00:30:28.116 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:28.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:28.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:28.116 Initialization complete. Launching workers. 
00:30:28.116 ======================================================== 00:30:28.116 Latency(us) 00:30:28.116 Device Information : IOPS MiB/s Average min max 00:30:28.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1087.45 271.86 123944.64 78784.66 304309.96 00:30:28.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 519.98 129.99 258421.09 129207.97 511532.73 00:30:28.116 ======================================================== 00:30:28.116 Total : 1607.42 401.86 167445.58 78784.66 511532.73 00:30:28.116 00:30:28.375 05:19:34 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:28.375 EAL: No free 2048 kB hugepages reported on node 1 00:30:28.633 No valid NVMe controllers or AIO or URING devices found 00:30:28.633 Initializing NVMe Controllers 00:30:28.633 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:28.633 Controller IO queue size 128, less than required. 00:30:28.633 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:28.633 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:28.633 Controller IO queue size 128, less than required. 00:30:28.633 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:28.633 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:30:28.633 WARNING: Some requested NVMe devices were skipped 00:30:28.633 05:19:34 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:28.633 EAL: No free 2048 kB hugepages reported on node 1 00:30:31.914 Initializing NVMe Controllers 00:30:31.914 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:31.914 Controller IO queue size 128, less than required. 00:30:31.914 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:31.914 Controller IO queue size 128, less than required. 00:30:31.914 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:31.914 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:31.914 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:31.914 Initialization complete. Launching workers. 
00:30:31.914 00:30:31.914 ==================== 00:30:31.914 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:31.914 TCP transport: 00:30:31.914 polls: 8310 00:30:31.914 idle_polls: 3593 00:30:31.914 sock_completions: 4717 00:30:31.914 nvme_completions: 4225 00:30:31.914 submitted_requests: 6324 00:30:31.914 queued_requests: 1 00:30:31.914 00:30:31.914 ==================== 00:30:31.914 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:31.914 TCP transport: 00:30:31.914 polls: 11151 00:30:31.914 idle_polls: 6268 00:30:31.914 sock_completions: 4883 00:30:31.914 nvme_completions: 4949 00:30:31.914 submitted_requests: 7416 00:30:31.914 queued_requests: 1 00:30:31.914 ======================================================== 00:30:31.914 Latency(us) 00:30:31.914 Device Information : IOPS MiB/s Average min max 00:30:31.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1055.74 263.94 126943.17 78444.53 295326.94 00:30:31.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1236.70 309.18 107547.57 65332.92 407335.86 00:30:31.914 ======================================================== 00:30:31.914 Total : 2292.45 573.11 116479.86 65332.92 407335.86 00:30:31.914 00:30:31.914 05:19:38 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:31.914 05:19:38 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:31.914 05:19:38 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:31.914 05:19:38 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:30:31.914 05:19:38 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:36.094 05:19:41 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=a5b33e96-1d2f-4300-af52-d87738d0a7e6 00:30:36.094 05:19:41 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb a5b33e96-1d2f-4300-af52-d87738d0a7e6 00:30:36.094 05:19:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=a5b33e96-1d2f-4300-af52-d87738d0a7e6 00:30:36.094 05:19:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:36.094 05:19:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:30:36.094 05:19:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:30:36.094 05:19:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:36.094 05:19:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:36.094 { 00:30:36.094 "uuid": "a5b33e96-1d2f-4300-af52-d87738d0a7e6", 00:30:36.094 "name": "lvs_0", 00:30:36.094 "base_bdev": "Nvme0n1", 00:30:36.094 "total_data_clusters": 238234, 00:30:36.094 "free_clusters": 238234, 00:30:36.094 "block_size": 512, 00:30:36.094 "cluster_size": 4194304 00:30:36.094 } 00:30:36.094 ]' 00:30:36.094 05:19:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="a5b33e96-1d2f-4300-af52-d87738d0a7e6") .free_clusters' 00:30:36.094 05:19:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:30:36.094 05:19:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="a5b33e96-1d2f-4300-af52-d87738d0a7e6") .cluster_size' 00:30:36.094 05:19:42 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:36.094 05:19:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:30:36.094 05:19:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:30:36.094 952936 00:30:36.094 05:19:42 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:36.094 05:19:42 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:36.094 05:19:42 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a5b33e96-1d2f-4300-af52-d87738d0a7e6 lbd_0 20480 00:30:36.094 05:19:42 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=0db949e8-1bb3-46e9-b81d-91cfc87fb895 00:30:36.094 05:19:42 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 0db949e8-1bb3-46e9-b81d-91cfc87fb895 lvs_n_0 00:30:37.023 05:19:43 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=d9dbde74-d373-4bd8-87ce-af2146ca5d81 00:30:37.023 05:19:43 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb d9dbde74-d373-4bd8-87ce-af2146ca5d81 00:30:37.023 05:19:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=d9dbde74-d373-4bd8-87ce-af2146ca5d81 00:30:37.023 05:19:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:37.023 05:19:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:30:37.023 05:19:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:30:37.023 05:19:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:37.279 05:19:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:37.279 { 00:30:37.279 "uuid": "a5b33e96-1d2f-4300-af52-d87738d0a7e6", 00:30:37.279 "name": "lvs_0", 00:30:37.279 "base_bdev": "Nvme0n1", 00:30:37.279 "total_data_clusters": 238234, 00:30:37.279 "free_clusters": 233114, 00:30:37.279 "block_size": 512, 00:30:37.279 "cluster_size": 4194304 00:30:37.279 }, 00:30:37.279 { 00:30:37.279 "uuid": "d9dbde74-d373-4bd8-87ce-af2146ca5d81", 00:30:37.279 "name": "lvs_n_0", 00:30:37.279 "base_bdev": "0db949e8-1bb3-46e9-b81d-91cfc87fb895", 00:30:37.279 "total_data_clusters": 5114, 00:30:37.279 "free_clusters": 5114, 00:30:37.279 "block_size": 512, 00:30:37.279 "cluster_size": 4194304 00:30:37.279 } 00:30:37.279 ]' 00:30:37.279 05:19:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="d9dbde74-d373-4bd8-87ce-af2146ca5d81") .free_clusters' 00:30:37.279 05:19:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:30:37.279 05:19:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="d9dbde74-d373-4bd8-87ce-af2146ca5d81") .cluster_size' 00:30:37.279 05:19:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:37.279 05:19:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:30:37.279 05:19:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:30:37.279 20456 00:30:37.279 05:19:43 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:37.279 05:19:43 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d9dbde74-d373-4bd8-87ce-af2146ca5d81 lbd_nest_0 20456 00:30:37.536 05:19:43 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=67be60c7-92ee-40d4-9e01-f26c5a3ac927 00:30:37.536 05:19:43 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:37.793 05:19:44 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:37.793 05:19:44 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 67be60c7-92ee-40d4-9e01-f26c5a3ac927 00:30:38.051 05:19:44 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:38.309 05:19:44 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:38.309 05:19:44 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:38.309 05:19:44 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:38.309 05:19:44 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:38.309 05:19:44 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:38.309 EAL: No free 2048 kB hugepages reported on node 1 00:30:50.495 Initializing NVMe Controllers 00:30:50.495 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:50.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:50.495 Initialization complete. Launching workers. 00:30:50.495 ======================================================== 00:30:50.495 Latency(us) 00:30:50.495 Device Information : IOPS MiB/s Average min max 00:30:50.495 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.60 0.02 21474.35 257.81 45898.99 00:30:50.495 ======================================================== 00:30:50.495 Total : 46.60 0.02 21474.35 257.81 45898.99 00:30:50.495 00:30:50.495 05:19:55 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:50.495 05:19:55 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:50.495 EAL: No free 2048 kB hugepages reported on node 1 00:31:00.522 Initializing NVMe Controllers 00:31:00.522 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:00.522 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:00.522 Initialization complete. Launching workers. 
00:31:00.522 ======================================================== 00:31:00.522 Latency(us) 00:31:00.522 Device Information : IOPS MiB/s Average min max 00:31:00.522 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 82.90 10.36 12079.13 4850.19 47907.46 00:31:00.522 ======================================================== 00:31:00.522 Total : 82.90 10.36 12079.13 4850.19 47907.46 00:31:00.522 00:31:00.522 05:20:05 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:00.522 05:20:05 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:00.522 05:20:05 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:00.522 EAL: No free 2048 kB hugepages reported on node 1 00:31:10.480 Initializing NVMe Controllers 00:31:10.480 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:10.480 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:10.480 Initialization complete. Launching workers. 00:31:10.480 ======================================================== 00:31:10.480 Latency(us) 00:31:10.480 Device Information : IOPS MiB/s Average min max 00:31:10.480 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4682.76 2.29 6836.68 640.51 42108.18 00:31:10.480 ======================================================== 00:31:10.480 Total : 4682.76 2.29 6836.68 640.51 42108.18 00:31:10.480 00:31:10.480 05:20:16 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:10.480 05:20:16 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:10.480 EAL: No free 2048 kB hugepages reported on node 1 00:31:20.480 Initializing NVMe Controllers 00:31:20.480 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:20.480 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:20.480 Initialization complete. Launching workers. 00:31:20.480 ======================================================== 00:31:20.480 Latency(us) 00:31:20.480 Device Information : IOPS MiB/s Average min max 00:31:20.480 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2403.00 300.37 13331.18 849.00 29640.47 00:31:20.480 ======================================================== 00:31:20.480 Total : 2403.00 300.37 13331.18 849.00 29640.47 00:31:20.480 00:31:20.480 05:20:26 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:20.480 05:20:26 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:20.480 05:20:26 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:20.480 EAL: No free 2048 kB hugepages reported on node 1 00:31:32.674 Initializing NVMe Controllers 00:31:32.674 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:32.674 Controller IO queue size 128, less than required. 00:31:32.674 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:31:32.674 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:32.674 Initialization complete. Launching workers. 00:31:32.674 ======================================================== 00:31:32.674 Latency(us) 00:31:32.674 Device Information : IOPS MiB/s Average min max 00:31:32.674 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8446.70 4.12 15162.90 1847.74 38245.77 00:31:32.674 ======================================================== 00:31:32.674 Total : 8446.70 4.12 15162.90 1847.74 38245.77 00:31:32.674 00:31:32.674 05:20:37 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:32.674 05:20:37 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:32.674 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.641 Initializing NVMe Controllers 00:31:42.641 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:42.641 Controller IO queue size 128, less than required. 00:31:42.641 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:42.641 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:42.641 Initialization complete. Launching workers. 00:31:42.641 ======================================================== 00:31:42.641 Latency(us) 00:31:42.641 Device Information : IOPS MiB/s Average min max 00:31:42.641 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1204.42 150.55 106425.94 31229.01 215240.06 00:31:42.641 ======================================================== 00:31:42.641 Total : 1204.42 150.55 106425.94 31229.01 215240.06 00:31:42.641 00:31:42.641 05:20:47 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:42.641 05:20:47 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 67be60c7-92ee-40d4-9e01-f26c5a3ac927 00:31:42.641 05:20:48 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:42.641 05:20:49 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0db949e8-1bb3-46e9-b81d-91cfc87fb895 00:31:42.899 05:20:49 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:43.155 05:20:49 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:43.155 05:20:49 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:43.155 05:20:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:43.155 05:20:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:31:43.155 05:20:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:43.155 05:20:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:31:43.155 05:20:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:43.155 05:20:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:43.155 rmmod nvme_tcp 00:31:43.155 rmmod nvme_fabrics 00:31:43.413 rmmod nvme_keyring 00:31:43.413 05:20:49 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:43.413 05:20:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:31:43.413 05:20:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:31:43.413 05:20:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 802887 ']' 00:31:43.413 05:20:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 802887 00:31:43.413 05:20:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 802887 ']' 00:31:43.413 05:20:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 802887 00:31:43.413 05:20:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:31:43.413 05:20:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:43.413 05:20:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 802887 00:31:43.413 05:20:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:43.413 05:20:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:43.413 05:20:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 802887' 00:31:43.413 killing process with pid 802887 00:31:43.413 05:20:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 802887 00:31:43.413 05:20:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 802887 00:31:45.936 05:20:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:45.936 05:20:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:45.936 05:20:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:45.936 05:20:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:45.936 05:20:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:45.936 05:20:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.936 05:20:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:45.936 05:20:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.836 05:20:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:47.836 00:31:47.836 real 1m35.452s 00:31:47.836 user 5m52.873s 00:31:47.836 sys 0m15.376s 00:31:47.836 05:20:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:47.836 05:20:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:47.836 ************************************ 00:31:47.836 END TEST nvmf_perf 00:31:47.836 ************************************ 00:31:48.094 05:20:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:48.094 05:20:54 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:48.094 05:20:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:48.094 05:20:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:48.094 05:20:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:48.094 ************************************ 00:31:48.094 START TEST nvmf_fio_host 00:31:48.094 ************************************ 00:31:48.094 05:20:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:48.094 * Looking for test storage... 
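
For reference, the nvmf_perf teardown that ran just above (perf.sh@104-108) follows a strict top-down order: the subsystem, and with it the listener, goes first, then each logical volume before the volume store that backs it, so nothing is deleted while a consumer still claims it. Condensed from the RPCs in the log, with the rpc.py path shortened:

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py bdev_lvol_delete 67be60c7-92ee-40d4-9e01-f26c5a3ac927   # nested lvol
    scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0                     # nested lvstore
    scripts/rpc.py bdev_lvol_delete 0db949e8-1bb3-46e9-b81d-91cfc87fb895   # base lvol
    scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0                       # base lvstore
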
00:31:48.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:48.094 05:20:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:48.094 05:20:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:48.094 05:20:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:48.094 05:20:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:48.094 05:20:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.094 05:20:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.094 05:20:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.094 05:20:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:48.094 05:20:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.094 05:20:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:48.094 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:48.094 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:48.094 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:48.094 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:48.094 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:48.094 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:48.094 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:48.094 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:48.094 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:31:48.095 05:20:54 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:49.994 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:49.994 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:49.994 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:49.994 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
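
The NIC discovery above reduces to two lookups: match each device's PCI vendor:device pair against the tables built at nvmf/common.sh@296-318 (0x8086:0x159b is an Intel E810 port, driven by 'ice'), then list the kernel net devices sitting under each matching PCI function in sysfs. A standalone sketch of the same classification, assuming the pciutils lspci tool in place of the script's cached PCI scan:

    lspci -Dn -d 8086:159b                    # -> 0000:0a:00.0 and 0000:0a:00.1
    for pci in 0000:0a:00.0 0000:0a:00.1; do
      ls "/sys/bus/pci/devices/$pci/net/"     # -> cvl_0_0, cvl_0_1
    done
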
00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:49.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:49.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:31:49.994 00:31:49.994 --- 10.0.0.2 ping statistics --- 00:31:49.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:49.994 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:31:49.994 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:49.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:49.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:31:49.995 00:31:49.995 --- 10.0.0.1 ping statistics --- 00:31:49.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:49.995 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:31:49.995 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:49.995 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:31:49.995 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:49.995 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:49.995 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:49.995 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:49.995 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:49.995 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:49.995 05:20:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:49.995 05:20:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:49.995 05:20:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:49.995 05:20:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:49.995 05:20:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.995 05:20:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=815402 00:31:49.995 05:20:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:49.995 05:20:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:49.995 05:20:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 815402 00:31:49.995 05:20:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 815402 ']' 00:31:49.995 05:20:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:49.995 05:20:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:49.995 05:20:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:49.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:49.995 05:20:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:49.995 05:20:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.995 [2024-07-13 05:20:56.478298] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:31:49.995 [2024-07-13 05:20:56.478444] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:50.253 EAL: No free 2048 kB hugepages reported on node 1 00:31:50.253 [2024-07-13 05:20:56.612172] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:50.511 [2024-07-13 05:20:56.840193] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
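
The two successful pings close out the point-to-point topology that nvmf_tcp_init assembled: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and addressed 10.0.0.2/24 as the target side, the second port (cvl_0_1) stays in the host namespace as the initiator at 10.0.0.1/24, and TCP port 4420 is opened through iptables. Condensed from the commands visible in the log above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator, host ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target, inside ns
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                  # host -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host

The nvmf_tgt process itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as shown at fio.sh@23), so the target only ever sees the interface that was moved in.
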
00:31:50.511 [2024-07-13 05:20:56.840270] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:50.511 [2024-07-13 05:20:56.840309] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:50.511 [2024-07-13 05:20:56.840327] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:50.511 [2024-07-13 05:20:56.840347] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:50.511 [2024-07-13 05:20:56.840483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:50.511 [2024-07-13 05:20:56.840547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:50.511 [2024-07-13 05:20:56.840589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:50.511 [2024-07-13 05:20:56.840600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:51.076 05:20:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:51.076 05:20:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:31:51.076 05:20:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:51.334 [2024-07-13 05:20:57.700349] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:51.334 05:20:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:51.334 05:20:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:51.334 05:20:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.334 05:20:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:51.591 Malloc1 00:31:51.591 05:20:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:51.849 05:20:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:52.106 05:20:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:52.364 [2024-07-13 05:20:58.777068] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:52.364 05:20:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:52.621 05:20:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:52.621 05:20:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:52.621 05:20:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:31:52.621 05:20:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:52.621 05:20:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:52.621 05:20:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:52.621 05:20:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:52.621 05:20:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:52.621 05:20:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:52.621 05:20:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:52.621 05:20:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:52.621 05:20:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:52.621 05:20:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:52.621 05:20:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:52.621 05:20:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:52.621 05:20:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:31:52.621 05:20:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:52.621 05:20:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:52.879 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:52.879 fio-3.35 00:31:52.879 Starting 1 thread 00:31:52.879 EAL: No free 2048 kB hugepages reported on node 1 00:31:55.406 00:31:55.406 test: (groupid=0, jobs=1): err= 0: pid=815836: Sat Jul 13 05:21:01 2024 00:31:55.406 read: IOPS=5623, BW=22.0MiB/s (23.0MB/s)(44.1MiB/2009msec) 00:31:55.406 slat (usec): min=2, max=296, avg= 3.93, stdev= 3.65 00:31:55.406 clat (usec): min=4286, max=19464, avg=12470.67, stdev=1052.44 00:31:55.406 lat (usec): min=4334, max=19468, avg=12474.60, stdev=1052.27 00:31:55.406 clat percentiles (usec): 00:31:55.406 | 1.00th=[10159], 5.00th=[10814], 10.00th=[11207], 20.00th=[11600], 00:31:55.406 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[12649], 00:31:55.406 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13698], 95.00th=[14091], 00:31:55.406 | 99.00th=[14746], 99.50th=[15139], 99.90th=[17957], 99.95th=[18220], 00:31:55.406 | 99.99th=[18482] 00:31:55.406 bw ( KiB/s): min=21072, max=23232, per=99.87%, avg=22466.00, stdev=969.03, samples=4 00:31:55.406 iops : min= 5268, max= 5808, avg=5616.50, stdev=242.26, samples=4 00:31:55.406 write: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(43.8MiB/2009msec); 0 zone resets 00:31:55.406 slat (usec): min=3, max=147, avg= 4.16, stdev= 2.61 00:31:55.406 clat (usec): min=1910, max=18359, avg=10222.77, stdev=927.85 00:31:55.406 lat (usec): min=1926, max=18363, avg=10226.93, stdev=927.81 00:31:55.406 clat percentiles (usec): 00:31:55.406 | 1.00th=[ 8225], 5.00th=[ 
8848], 10.00th=[ 9241], 20.00th=[ 9503], 00:31:55.406 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:31:55.406 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11600], 00:31:55.406 | 99.00th=[12387], 99.50th=[12780], 99.90th=[17433], 99.95th=[17957], 00:31:55.406 | 99.99th=[18220] 00:31:55.406 bw ( KiB/s): min=22104, max=22528, per=99.90%, avg=22326.00, stdev=173.48, samples=4 00:31:55.406 iops : min= 5526, max= 5632, avg=5581.50, stdev=43.37, samples=4 00:31:55.406 lat (msec) : 2=0.01%, 4=0.04%, 10=20.32%, 20=79.63% 00:31:55.406 cpu : usr=63.94%, sys=32.77%, ctx=58, majf=0, minf=1538 00:31:55.406 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:55.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:55.406 issued rwts: total=11298,11225,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:55.406 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:55.406 00:31:55.406 Run status group 0 (all jobs): 00:31:55.406 READ: bw=22.0MiB/s (23.0MB/s), 22.0MiB/s-22.0MiB/s (23.0MB/s-23.0MB/s), io=44.1MiB (46.3MB), run=2009-2009msec 00:31:55.406 WRITE: bw=21.8MiB/s (22.9MB/s), 21.8MiB/s-21.8MiB/s (22.9MB/s-22.9MB/s), io=43.8MiB (46.0MB), run=2009-2009msec 00:31:55.664 ----------------------------------------------------- 00:31:55.664 Suppressions used: 00:31:55.664 count bytes template 00:31:55.664 1 57 /usr/src/fio/parse.c 00:31:55.664 1 8 libtcmalloc_minimal.so 00:31:55.664 ----------------------------------------------------- 00:31:55.664 00:31:55.664 05:21:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:55.664 05:21:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:55.664 05:21:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:55.664 05:21:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:55.664 05:21:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:55.664 05:21:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:55.664 05:21:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:55.664 05:21:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:55.664 05:21:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:55.664 05:21:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:55.664 05:21:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:55.664 05:21:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:55.664 05:21:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:55.664 05:21:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 
-- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:55.664 05:21:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:31:55.664 05:21:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:55.664 05:21:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:55.952 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:55.952 fio-3.35 00:31:55.952 Starting 1 thread 00:31:55.952 EAL: No free 2048 kB hugepages reported on node 1 00:31:58.480 00:31:58.480 test: (groupid=0, jobs=1): err= 0: pid=816402: Sat Jul 13 05:21:04 2024 00:31:58.480 read: IOPS=6183, BW=96.6MiB/s (101MB/s)(194MiB/2009msec) 00:31:58.480 slat (usec): min=3, max=117, avg= 5.02, stdev= 2.07 00:31:58.480 clat (usec): min=2775, max=57490, avg=12218.95, stdev=4732.46 00:31:58.480 lat (usec): min=2780, max=57495, avg=12223.97, stdev=4732.56 00:31:58.480 clat percentiles (usec): 00:31:58.480 | 1.00th=[ 6587], 5.00th=[ 7767], 10.00th=[ 8717], 20.00th=[ 9765], 00:31:58.480 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11600], 60.00th=[12387], 00:31:58.480 | 70.00th=[12911], 80.00th=[13829], 90.00th=[15401], 95.00th=[16712], 00:31:58.480 | 99.00th=[44827], 99.50th=[51119], 99.90th=[55837], 99.95th=[55837], 00:31:58.480 | 99.99th=[56361] 00:31:58.480 bw ( KiB/s): min=37216, max=57856, per=49.78%, avg=49248.00, stdev=10240.70, samples=4 00:31:58.480 iops : min= 2326, max= 3616, avg=3078.00, stdev=640.04, samples=4 00:31:58.480 write: IOPS=3630, BW=56.7MiB/s (59.5MB/s)(101MiB/1786msec); 0 zone resets 00:31:58.480 slat (usec): min=33, max=161, avg=37.22, stdev= 5.65 00:31:58.480 clat (usec): min=7365, max=27026, avg=15221.94, stdev=2611.54 00:31:58.480 lat (usec): min=7402, max=27069, avg=15259.16, stdev=2611.39 00:31:58.480 clat percentiles (usec): 00:31:58.480 | 1.00th=[10159], 5.00th=[11600], 10.00th=[12256], 20.00th=[13042], 00:31:58.480 | 30.00th=[13698], 40.00th=[14222], 50.00th=[14877], 60.00th=[15533], 00:31:58.480 | 70.00th=[16319], 80.00th=[17433], 90.00th=[18744], 95.00th=[19792], 00:31:58.480 | 99.00th=[21890], 99.50th=[23462], 99.90th=[26346], 99.95th=[26608], 00:31:58.480 | 99.99th=[27132] 00:31:58.480 bw ( KiB/s): min=38272, max=60384, per=88.21%, avg=51240.00, stdev=10778.87, samples=4 00:31:58.480 iops : min= 2392, max= 3774, avg=3202.50, stdev=673.68, samples=4 00:31:58.480 lat (msec) : 4=0.03%, 10=15.85%, 20=81.73%, 50=1.96%, 100=0.43% 00:31:58.480 cpu : usr=75.36%, sys=22.05%, ctx=39, majf=0, minf=2079 00:31:58.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:31:58.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:58.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:58.480 issued rwts: total=12423,6484,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:58.480 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:58.480 00:31:58.480 Run status group 0 (all jobs): 00:31:58.480 READ: bw=96.6MiB/s (101MB/s), 96.6MiB/s-96.6MiB/s (101MB/s-101MB/s), io=194MiB (204MB), run=2009-2009msec 00:31:58.480 WRITE: bw=56.7MiB/s (59.5MB/s), 56.7MiB/s-56.7MiB/s (59.5MB/s-59.5MB/s), io=101MiB (106MB), run=1786-1786msec 00:31:58.480 
----------------------------------------------------- 00:31:58.480 Suppressions used: 00:31:58.480 count bytes template 00:31:58.480 1 57 /usr/src/fio/parse.c 00:31:58.480 265 25440 /usr/src/fio/iolog.c 00:31:58.480 1 8 libtcmalloc_minimal.so 00:31:58.480 ----------------------------------------------------- 00:31:58.480 00:31:58.480 05:21:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:58.737 05:21:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:58.737 05:21:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:58.737 05:21:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:58.737 05:21:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:31:58.737 05:21:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:31:58.737 05:21:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:58.737 05:21:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:58.737 05:21:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:31:58.737 05:21:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:31:58.738 05:21:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:31:58.738 05:21:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:32:02.012 Nvme0n1 00:32:02.012 05:21:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:05.292 05:21:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=aae3be2d-19e0-4f81-aa74-4f0fcd3bbd31 00:32:05.292 05:21:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb aae3be2d-19e0-4f81-aa74-4f0fcd3bbd31 00:32:05.292 05:21:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=aae3be2d-19e0-4f81-aa74-4f0fcd3bbd31 00:32:05.292 05:21:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:05.292 05:21:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:32:05.292 05:21:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:32:05.292 05:21:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:05.292 05:21:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:05.292 { 00:32:05.292 "uuid": "aae3be2d-19e0-4f81-aa74-4f0fcd3bbd31", 00:32:05.292 "name": "lvs_0", 00:32:05.292 "base_bdev": "Nvme0n1", 00:32:05.292 "total_data_clusters": 930, 00:32:05.292 "free_clusters": 930, 00:32:05.292 "block_size": 512, 00:32:05.292 "cluster_size": 1073741824 00:32:05.292 } 00:32:05.292 ]' 00:32:05.292 05:21:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="aae3be2d-19e0-4f81-aa74-4f0fcd3bbd31") .free_clusters' 00:32:05.292 05:21:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:32:05.292 05:21:11 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="aae3be2d-19e0-4f81-aa74-4f0fcd3bbd31") .cluster_size' 00:32:05.292 05:21:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:32:05.292 05:21:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:32:05.292 05:21:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:32:05.292 952320 00:32:05.292 05:21:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:32:05.550 aefdc988-a883-4a2a-8120-5ac596be43ba 00:32:05.550 05:21:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:05.807 05:21:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:06.065 05:21:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:06.323 05:21:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:06.323 05:21:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:06.323 05:21:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:06.324 05:21:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:06.324 05:21:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:06.324 05:21:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:06.324 05:21:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:06.324 05:21:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:06.324 05:21:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:06.324 05:21:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:06.324 05:21:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:06.324 05:21:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:06.324 05:21:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:06.324 05:21:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:06.324 05:21:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:32:06.324 05:21:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:06.324 05:21:12 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:06.580 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:06.580 fio-3.35 00:32:06.580 Starting 1 thread 00:32:06.580 EAL: No free 2048 kB hugepages reported on node 1 00:32:09.101 00:32:09.101 test: (groupid=0, jobs=1): err= 0: pid=818178: Sat Jul 13 05:21:15 2024 00:32:09.101 read: IOPS=4470, BW=17.5MiB/s (18.3MB/s)(35.1MiB/2009msec) 00:32:09.101 slat (usec): min=2, max=196, avg= 3.88, stdev= 2.85 00:32:09.101 clat (usec): min=1258, max=172772, avg=15668.16, stdev=13087.14 00:32:09.101 lat (usec): min=1262, max=172826, avg=15672.04, stdev=13087.58 00:32:09.101 clat percentiles (msec): 00:32:09.101 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 13], 20.00th=[ 14], 00:32:09.101 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 15], 60.00th=[ 15], 00:32:09.101 | 70.00th=[ 16], 80.00th=[ 16], 90.00th=[ 17], 95.00th=[ 17], 00:32:09.101 | 99.00th=[ 21], 99.50th=[ 157], 99.90th=[ 174], 99.95th=[ 174], 00:32:09.101 | 99.99th=[ 174] 00:32:09.101 bw ( KiB/s): min=12688, max=19752, per=99.51%, avg=17796.00, stdev=3410.06, samples=4 00:32:09.101 iops : min= 3172, max= 4938, avg=4449.00, stdev=852.51, samples=4 00:32:09.101 write: IOPS=4459, BW=17.4MiB/s (18.3MB/s)(35.0MiB/2009msec); 0 zone resets 00:32:09.101 slat (usec): min=3, max=130, avg= 4.10, stdev= 1.94 00:32:09.101 clat (usec): min=445, max=170273, avg=12780.28, stdev=12353.52 00:32:09.101 lat (usec): min=449, max=170284, avg=12784.38, stdev=12353.97 00:32:09.101 clat percentiles (msec): 00:32:09.101 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:32:09.101 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 13], 00:32:09.101 | 70.00th=[ 13], 80.00th=[ 13], 90.00th=[ 14], 95.00th=[ 14], 00:32:09.101 | 99.00th=[ 15], 99.50th=[ 159], 99.90th=[ 171], 99.95th=[ 171], 00:32:09.101 | 99.99th=[ 171] 00:32:09.101 bw ( KiB/s): min=13352, max=19520, per=99.97%, avg=17834.00, stdev=2991.34, samples=4 00:32:09.101 iops : min= 3338, max= 4880, avg=4458.50, stdev=747.83, samples=4 00:32:09.101 lat (usec) : 500=0.01%, 1000=0.02% 00:32:09.101 lat (msec) : 2=0.02%, 4=0.08%, 10=2.22%, 20=96.77%, 50=0.17% 00:32:09.101 lat (msec) : 250=0.71% 00:32:09.101 cpu : usr=64.19%, sys=33.07%, ctx=98, majf=0, minf=1533 00:32:09.101 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:32:09.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:09.101 issued rwts: total=8982,8960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.101 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:09.101 00:32:09.101 Run status group 0 (all jobs): 00:32:09.101 READ: bw=17.5MiB/s (18.3MB/s), 17.5MiB/s-17.5MiB/s (18.3MB/s-18.3MB/s), io=35.1MiB (36.8MB), run=2009-2009msec 00:32:09.101 WRITE: bw=17.4MiB/s (18.3MB/s), 17.4MiB/s-17.4MiB/s (18.3MB/s-18.3MB/s), io=35.0MiB (36.7MB), run=2009-2009msec 00:32:09.101 ----------------------------------------------------- 00:32:09.101 Suppressions used: 00:32:09.101 count bytes template 00:32:09.101 1 58 /usr/src/fio/parse.c 00:32:09.101 1 8 libtcmalloc_minimal.so 00:32:09.101 ----------------------------------------------------- 00:32:09.101 00:32:09.101 05:21:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:09.358 05:21:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:10.729 05:21:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=51eeb16a-d2fc-4524-8835-1781d74d4f2e 00:32:10.729 05:21:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 51eeb16a-d2fc-4524-8835-1781d74d4f2e 00:32:10.729 05:21:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=51eeb16a-d2fc-4524-8835-1781d74d4f2e 00:32:10.729 05:21:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:10.729 05:21:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:32:10.729 05:21:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:32:10.729 05:21:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:10.729 05:21:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:10.729 { 00:32:10.729 "uuid": "aae3be2d-19e0-4f81-aa74-4f0fcd3bbd31", 00:32:10.730 "name": "lvs_0", 00:32:10.730 "base_bdev": "Nvme0n1", 00:32:10.730 "total_data_clusters": 930, 00:32:10.730 "free_clusters": 0, 00:32:10.730 "block_size": 512, 00:32:10.730 "cluster_size": 1073741824 00:32:10.730 }, 00:32:10.730 { 00:32:10.730 "uuid": "51eeb16a-d2fc-4524-8835-1781d74d4f2e", 00:32:10.730 "name": "lvs_n_0", 00:32:10.730 "base_bdev": "aefdc988-a883-4a2a-8120-5ac596be43ba", 00:32:10.730 "total_data_clusters": 237847, 00:32:10.730 "free_clusters": 237847, 00:32:10.730 "block_size": 512, 00:32:10.730 "cluster_size": 4194304 00:32:10.730 } 00:32:10.730 ]' 00:32:10.730 05:21:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="51eeb16a-d2fc-4524-8835-1781d74d4f2e") .free_clusters' 00:32:10.986 05:21:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:32:10.986 05:21:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="51eeb16a-d2fc-4524-8835-1781d74d4f2e") .cluster_size' 00:32:10.986 05:21:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:32:10.986 05:21:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:32:10.986 05:21:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:32:10.986 951388 00:32:10.986 05:21:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:32:11.941 598689b6-0d31-4897-82e7-8a0c8b908ab2 00:32:11.941 05:21:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:12.198 05:21:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:12.456 05:21:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:32:12.713 05:21:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 
-- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:12.713 05:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:12.713 05:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:12.713 05:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:12.713 05:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:12.713 05:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:12.713 05:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:12.713 05:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:12.713 05:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:12.713 05:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:12.713 05:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:12.713 05:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:12.713 05:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:12.713 05:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:12.713 05:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:32:12.713 05:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:12.713 05:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:12.970 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:12.970 fio-3.35 00:32:12.970 Starting 1 thread 00:32:13.228 EAL: No free 2048 kB hugepages reported on node 1 00:32:15.818 00:32:15.818 test: (groupid=0, jobs=1): err= 0: pid=819031: Sat Jul 13 05:21:21 2024 00:32:15.818 read: IOPS=4364, BW=17.0MiB/s (17.9MB/s)(34.3MiB/2011msec) 00:32:15.818 slat (usec): min=2, max=156, avg= 3.73, stdev= 2.55 00:32:15.818 clat (usec): min=6105, max=26199, avg=16099.87, stdev=1494.99 00:32:15.818 lat (usec): min=6132, max=26203, avg=16103.60, stdev=1494.86 00:32:15.818 clat percentiles (usec): 00:32:15.818 | 1.00th=[12649], 5.00th=[13698], 10.00th=[14222], 20.00th=[15008], 00:32:15.818 | 30.00th=[15401], 40.00th=[15795], 50.00th=[16188], 60.00th=[16450], 00:32:15.818 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17957], 95.00th=[18220], 00:32:15.818 | 99.00th=[19268], 99.50th=[20055], 99.90th=[23987], 99.95th=[25822], 00:32:15.818 | 99.99th=[26084] 00:32:15.818 bw ( KiB/s): min=15960, max=18080, per=99.69%, avg=17404.00, 
stdev=980.76, samples=4 00:32:15.818 iops : min= 3990, max= 4520, avg=4351.00, stdev=245.19, samples=4 00:32:15.818 write: IOPS=4360, BW=17.0MiB/s (17.9MB/s)(34.2MiB/2011msec); 0 zone resets 00:32:15.818 slat (usec): min=3, max=123, avg= 3.95, stdev= 1.85 00:32:15.818 clat (usec): min=2913, max=22461, avg=12966.93, stdev=1263.95 00:32:15.818 lat (usec): min=2927, max=22465, avg=12970.88, stdev=1263.91 00:32:15.818 clat percentiles (usec): 00:32:15.818 | 1.00th=[10028], 5.00th=[11076], 10.00th=[11469], 20.00th=[11994], 00:32:15.818 | 30.00th=[12387], 40.00th=[12649], 50.00th=[13042], 60.00th=[13304], 00:32:15.818 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14484], 95.00th=[14877], 00:32:15.818 | 99.00th=[15795], 99.50th=[16319], 99.90th=[20579], 99.95th=[22152], 00:32:15.818 | 99.99th=[22414] 00:32:15.818 bw ( KiB/s): min=16920, max=17728, per=99.94%, avg=17430.00, stdev=367.03, samples=4 00:32:15.818 iops : min= 4230, max= 4432, avg=4357.50, stdev=91.76, samples=4 00:32:15.818 lat (msec) : 4=0.02%, 10=0.54%, 20=99.11%, 50=0.34% 00:32:15.818 cpu : usr=60.60%, sys=36.67%, ctx=64, majf=0, minf=1532 00:32:15.818 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:32:15.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:15.818 issued rwts: total=8777,8768,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:15.818 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:15.818 00:32:15.818 Run status group 0 (all jobs): 00:32:15.818 READ: bw=17.0MiB/s (17.9MB/s), 17.0MiB/s-17.0MiB/s (17.9MB/s-17.9MB/s), io=34.3MiB (35.9MB), run=2011-2011msec 00:32:15.818 WRITE: bw=17.0MiB/s (17.9MB/s), 17.0MiB/s-17.0MiB/s (17.9MB/s-17.9MB/s), io=34.2MiB (35.9MB), run=2011-2011msec 00:32:15.818 ----------------------------------------------------- 00:32:15.818 Suppressions used: 00:32:15.818 count bytes template 00:32:15.818 1 58 /usr/src/fio/parse.c 00:32:15.818 1 8 libtcmalloc_minimal.so 00:32:15.818 ----------------------------------------------------- 00:32:15.818 00:32:15.818 05:21:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:15.818 05:21:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:15.818 05:21:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:21.073 05:21:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:21.073 05:21:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:23.600 05:21:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:23.600 05:21:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:25.499 05:21:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:25.499 05:21:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:25.499 05:21:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:25.499 05:21:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 
-- # nvmfcleanup 00:32:25.499 05:21:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:32:25.499 05:21:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:25.499 05:21:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:32:25.499 05:21:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:25.499 05:21:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:25.499 rmmod nvme_tcp 00:32:25.499 rmmod nvme_fabrics 00:32:25.499 rmmod nvme_keyring 00:32:25.499 05:21:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:25.499 05:21:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:32:25.499 05:21:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:32:25.499 05:21:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 815402 ']' 00:32:25.499 05:21:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 815402 00:32:25.499 05:21:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 815402 ']' 00:32:25.499 05:21:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 815402 00:32:25.499 05:21:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:32:25.499 05:21:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:25.499 05:21:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 815402 00:32:25.758 05:21:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:25.758 05:21:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:25.758 05:21:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 815402' 00:32:25.758 killing process with pid 815402 00:32:25.758 05:21:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 815402 00:32:25.758 05:21:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 815402 00:32:27.130 05:21:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:27.130 05:21:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:27.130 05:21:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:27.130 05:21:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:27.130 05:21:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:27.130 05:21:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:27.130 05:21:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:27.130 05:21:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:29.030 05:21:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:29.030 00:32:29.030 real 0m41.117s 00:32:29.030 user 2m35.490s 00:32:29.030 sys 0m8.078s 00:32:29.030 05:21:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:29.030 05:21:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.030 ************************************ 00:32:29.030 END TEST nvmf_fio_host 00:32:29.030 ************************************ 00:32:29.030 05:21:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:29.030 05:21:35 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:29.030 05:21:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:29.030 05:21:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:29.030 05:21:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:29.288 ************************************ 00:32:29.288 START TEST nvmf_failover 00:32:29.288 ************************************ 00:32:29.288 05:21:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:29.288 * Looking for test storage... 00:32:29.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.289 (paths/export.sh@3, @4 and the @6 echo print rotations of this same PATH value; the duplicates are collapsed here) 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- #
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:32:29.289 05:21:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:31.190 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:31.190 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:31.190 05:21:37 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:31.190 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:31.190 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:31.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:31.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:32:31.190 00:32:31.190 --- 10.0.0.2 ping statistics --- 00:32:31.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.190 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:31.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:31.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:32:31.190 00:32:31.190 --- 10.0.0.1 ping statistics --- 00:32:31.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.190 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=822534 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 822534 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 822534 ']' 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:31.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
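
For reference, the target-namespace plumbing traced above (nvmf_tcp_init, nvmf/common.sh@242-@268) boils down to the shell sequence below. This is a minimal sketch run as root, assuming the cvl_0_0/cvl_0_1 port names and 10.0.0.0/24 addressing that this particular run detected; on another host the E810 netdev names will differ.

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1        # start from clean interfaces
ip netns add cvl_0_0_ns_spdk                                # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP on the default port
ping -c 1 10.0.0.2                                          # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # target -> initiator sanity check

This is also why nvmfappstart above launches the target under `ip netns exec cvl_0_0_ns_spdk`: the target application owns cvl_0_0 inside the namespace while the host-side tools keep cvl_0_1.
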
00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:31.190 05:21:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:31.447 [2024-07-13 05:21:37.755837] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:32:31.447 [2024-07-13 05:21:37.756034] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:31.447 EAL: No free 2048 kB hugepages reported on node 1 00:32:31.447 [2024-07-13 05:21:37.889796] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:31.703 [2024-07-13 05:21:38.143650] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:31.703 [2024-07-13 05:21:38.143721] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:31.703 [2024-07-13 05:21:38.143755] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:31.703 [2024-07-13 05:21:38.143776] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:31.703 [2024-07-13 05:21:38.143796] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:31.703 [2024-07-13 05:21:38.143928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:31.703 [2024-07-13 05:21:38.144043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:31.703 [2024-07-13 05:21:38.144051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:32.267 05:21:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:32.267 05:21:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:32:32.267 05:21:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:32.267 05:21:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:32.267 05:21:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:32.267 05:21:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:32.267 05:21:38 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:32.524 [2024-07-13 05:21:38.913314] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:32.524 05:21:38 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:32.782 Malloc0 00:32:32.782 05:21:39 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:33.038 05:21:39 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:33.296 05:21:39 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:33.583 [2024-07-13 05:21:39.983978] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:33.583 05:21:40 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:33.841 [2024-07-13 05:21:40.248835] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:33.841 05:21:40 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:34.099 [2024-07-13 05:21:40.501713] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:34.099 05:21:40 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=822951 00:32:34.099 05:21:40 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:34.099 05:21:40 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:34.099 05:21:40 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 822951 /var/tmp/bdevperf.sock 00:32:34.099 05:21:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 822951 ']' 00:32:34.099 05:21:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:34.099 05:21:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:34.099 05:21:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:34.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
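
Condensed from the host/failover.sh trace above (@22 through @31), the target and host setup for this test amounts to the sketch below. The $rpc shorthand and the listener loop are editorial conveniences, not the script's literal form; the run uses the absolute workspace path to rpc.py and issues the three add_listener calls one by one.

rpc=scripts/rpc.py                                           # stands in for the full workspace path
$rpc nvmf_create_transport -t tcp -o -u 8192                 # @22: TCP transport, opts from NVMF_TRANSPORT_OPTS
$rpc bdev_malloc_create 64 512 -b Malloc0                    # @23: 64 MiB backing bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # @24
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # @25
for port in 4420 4421 4422; do                               # @26-@28: three listeners, three paths to fail between
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done
# @30: bdevperf in wait-for-RPC mode (-z), 128-deep 4 KiB verify workload for 15 s
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
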
00:32:34.099 05:21:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:34.099 05:21:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:35.031 05:21:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:35.031 05:21:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:32:35.031 05:21:41 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:35.594 NVMe0n1 00:32:35.594 05:21:42 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:36.159 00:32:36.159 05:21:42 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=823102 00:32:36.159 05:21:42 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:36.159 05:21:42 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:32:37.094 05:21:43 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:37.352 [2024-07-13 05:21:43.706557] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:37.352 (this recv-state message for tqpair=0x618000003880 repeats roughly 70 times, 05:21:43.706557 through 05:21:43.707973; the duplicates are collapsed here) 00:32:37.352 05:21:43 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:32:40.629 05:21:46 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:40.629 00:32:40.629 05:21:47 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:40.885 [2024-07-13 05:21:47.310341] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:32:40.885 05:21:47 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:32:44.159
05:21:50 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:44.159 [2024-07-13 05:21:50.580464] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:44.159 05:21:50 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:32:45.533 05:21:51 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:45.533 05:21:51 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 823102 00:32:52.088 0 00:32:52.088 05:21:57 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 822951 00:32:52.088 05:21:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 822951 ']' 00:32:52.088 05:21:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 822951 00:32:52.088 05:21:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:32:52.088 05:21:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:52.088 05:21:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 822951 00:32:52.088 05:21:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:52.088 05:21:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:52.088 05:21:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 822951' 00:32:52.088 killing process with pid 822951 00:32:52.088 05:21:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 822951 00:32:52.088 05:21:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 822951 00:32:52.353 05:21:58 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:52.353 [2024-07-13 05:21:40.600512] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:32:52.353 [2024-07-13 05:21:40.600679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid822951 ] 00:32:52.353 EAL: No free 2048 kB hugepages reported on node 1 00:32:52.353 [2024-07-13 05:21:40.727719] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:52.353 [2024-07-13 05:21:40.961107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:52.353 Running I/O for 15 seconds... 
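
The dump that follows is host/failover.sh@63 replaying try.txt, bdevperf's own log of the 15-second run. It shows the failover from the host's side: the moment @43 removed the 4420 listener, the commands still in flight on that connection completed as ABORTED - SQ DELETION (00/08), which is what the repeated nvme_qpair prints below record, and the verify workload carried on over the surviving path. Stripped of absolute paths, the sequence the script drove against the two RPC sockets is roughly this (a sketch; the $rpc/$brpc shorthands are editorial):

rpc=scripts/rpc.py                                           # target RPC (default /var/tmp/spdk.sock)
brpc="$rpc -s /var/tmp/bdevperf.sock"                        # bdevperf RPC
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # @35
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # @36: second path
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &    # @38/@39: run_test_pid
sleep 1                                                                         # @41
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420; sleep 3   # @43/@45: fail over to 4421
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # @47: third path
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421; sleep 3   # @48/@50: fail over to 4422
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420; sleep 1      # @53/@55: restore 4420
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422            # @57: fail back to 4420
wait $run_test_pid                                           # @59: bdevperf finished cleanly (the 0 above)
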
00:32:52.353 [2024-07-13 05:21:43.709881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:58080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.353 [2024-07-13 05:21:43.709954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.353 (matching print_command/print_completion pairs follow for the rest of the I/O in flight on the deleted queue, READs at lba 58088 through 58296 plus one WRITE at lba 58584, every one completing ABORTED - SQ DELETION (00/08); the repeats are collapsed here down to the final pair) 00:32:52.353 [2024-07-13 05:21:43.711320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:58304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.353 [2024-07-13 05:21:43.711340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.353 [2024-07-13 05:21:43.711361] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:122 nsid:1 lba:58312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.353 [2024-07-13 05:21:43.711380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.353 [2024-07-13 05:21:43.711401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:58320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.353 [2024-07-13 05:21:43.711420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.353 [2024-07-13 05:21:43.711441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:58328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.353 [2024-07-13 05:21:43.711461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.353 [2024-07-13 05:21:43.711483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:58336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.353 [2024-07-13 05:21:43.711502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.353 [2024-07-13 05:21:43.711523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:58344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.353 [2024-07-13 05:21:43.711543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.353 [2024-07-13 05:21:43.711564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:58352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.353 [2024-07-13 05:21:43.711584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.353 [2024-07-13 05:21:43.711612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:58360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.353 [2024-07-13 05:21:43.711633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.353 [2024-07-13 05:21:43.711654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:58368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.353 [2024-07-13 05:21:43.711673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.353 [2024-07-13 05:21:43.711695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:58376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.354 [2024-07-13 05:21:43.711714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.711735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:58384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.354 [2024-07-13 05:21:43.711754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.711776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 
lba:58392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.354 [2024-07-13 05:21:43.711795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.711816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:58400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.354 [2024-07-13 05:21:43.711836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.711880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:58408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.354 [2024-07-13 05:21:43.711904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.711926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:58416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.354 [2024-07-13 05:21:43.711946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.711969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:58424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.354 [2024-07-13 05:21:43.711989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.712011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:58432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.354 [2024-07-13 05:21:43.712031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.712053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.354 [2024-07-13 05:21:43.712073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.712094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:58448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.354 [2024-07-13 05:21:43.712115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.712137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:58456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.354 [2024-07-13 05:21:43.712177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.712200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:58464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.354 [2024-07-13 05:21:43.712220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.712258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:58472 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:52.354 [2024-07-13 05:21:43.712279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.712303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.354 [2024-07-13 05:21:43.712324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.712346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:58488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.354 [2024-07-13 05:21:43.712367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.712389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:58496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.354 [2024-07-13 05:21:43.712409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.712431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:58504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.354 [2024-07-13 05:21:43.712452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.712473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:58512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.354 [2024-07-13 05:21:43.712494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.712515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:58520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.354 [2024-07-13 05:21:43.712536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.712557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:58528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.354 [2024-07-13 05:21:43.712578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.712601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:58536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.354 [2024-07-13 05:21:43.712621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.712643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:58544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.354 [2024-07-13 05:21:43.712663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.712685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:58552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.354 [2024-07-13 
05:21:43.712705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.712731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:58560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.354 [2024-07-13 05:21:43.712753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.712775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:58568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.354 [2024-07-13 05:21:43.712795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.712817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:58576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.354 [2024-07-13 05:21:43.712837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.712861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:58592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.354 [2024-07-13 05:21:43.712890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.712914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:58600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.354 [2024-07-13 05:21:43.712935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.712958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:58608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.354 [2024-07-13 05:21:43.712979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.713000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:58616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.354 [2024-07-13 05:21:43.713021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.713043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:58624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.354 [2024-07-13 05:21:43.713064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.713086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:58632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.354 [2024-07-13 05:21:43.713106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.713128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:58640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.354 [2024-07-13 05:21:43.713148] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.713171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:58648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.354 [2024-07-13 05:21:43.713191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.713213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:58656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.354 [2024-07-13 05:21:43.713233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.713256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:58664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.354 [2024-07-13 05:21:43.713276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.713304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.354 [2024-07-13 05:21:43.713326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.713348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:58680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.354 [2024-07-13 05:21:43.713369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.713391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.354 [2024-07-13 05:21:43.713412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.713434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:58696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.354 [2024-07-13 05:21:43.713454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.713477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.354 [2024-07-13 05:21:43.713510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.713535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.354 [2024-07-13 05:21:43.713556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.354 [2024-07-13 05:21:43.713579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:58720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.354 [2024-07-13 05:21:43.713600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.713622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:58728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.713643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.713665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:58736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.713686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.713708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:58744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.713728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.713751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:58752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.713771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.713794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:58760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.713814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.713836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:58768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.713861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.713892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.713915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.713938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.713958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.713981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:58792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.714002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.714024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:58800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.714045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:52.355 [2024-07-13 05:21:43.714067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:58808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.714087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.714110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:58816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.714130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.714153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:58824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.714173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.714195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:58832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.714215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.714238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:58840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.714260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.714283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:58848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.714303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.714325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:58856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.714346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.714368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:58864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.714389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.714416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:58872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.714437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.714460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.714480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.714502] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:58888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.714523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.714545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:58896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.714566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.714588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:58904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.714609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.714631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:58912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.714651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.714673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:58920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.714694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.714716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:58928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.714736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.714758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.714779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.714801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:58944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.714821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.714844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:58952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.714863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.714896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:58960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.714917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.714940] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:58968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.355 [2024-07-13 05:21:43.714961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.715012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:52.355 [2024-07-13 05:21:43.715038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58976 len:8 PRP1 0x0 PRP2 0x0 00:32:52.355 [2024-07-13 05:21:43.715060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.715089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:52.355 [2024-07-13 05:21:43.715109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:52.355 [2024-07-13 05:21:43.715136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58984 len:8 PRP1 0x0 PRP2 0x0 00:32:52.355 [2024-07-13 05:21:43.715156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.715176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:52.355 [2024-07-13 05:21:43.715193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:52.355 [2024-07-13 05:21:43.715210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58992 len:8 PRP1 0x0 PRP2 0x0 00:32:52.355 [2024-07-13 05:21:43.715228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.715246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:52.355 [2024-07-13 05:21:43.715262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:52.355 [2024-07-13 05:21:43.715279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59000 len:8 PRP1 0x0 PRP2 0x0 00:32:52.355 [2024-07-13 05:21:43.715297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.715316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:52.355 [2024-07-13 05:21:43.715332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:52.355 [2024-07-13 05:21:43.715349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59008 len:8 PRP1 0x0 PRP2 0x0 00:32:52.355 [2024-07-13 05:21:43.715367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.715385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:52.355 [2024-07-13 05:21:43.715400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:52.355 [2024-07-13 05:21:43.715417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59016 len:8 PRP1 0x0 PRP2 0x0 00:32:52.355 [2024-07-13 05:21:43.715435] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.715453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:52.355 [2024-07-13 05:21:43.715469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:52.355 [2024-07-13 05:21:43.715486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59024 len:8 PRP1 0x0 PRP2 0x0 00:32:52.355 [2024-07-13 05:21:43.715504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.355 [2024-07-13 05:21:43.715522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:52.356 [2024-07-13 05:21:43.715538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:52.356 [2024-07-13 05:21:43.715555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59032 len:8 PRP1 0x0 PRP2 0x0 00:32:52.356 [2024-07-13 05:21:43.715578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.356 [2024-07-13 05:21:43.715597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:52.356 [2024-07-13 05:21:43.715613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:52.356 [2024-07-13 05:21:43.715630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59040 len:8 PRP1 0x0 PRP2 0x0 00:32:52.356 [2024-07-13 05:21:43.715648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.356 [2024-07-13 05:21:43.715666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:52.356 [2024-07-13 05:21:43.715682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:52.356 [2024-07-13 05:21:43.715699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59048 len:8 PRP1 0x0 PRP2 0x0 00:32:52.356 [2024-07-13 05:21:43.715717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.356 [2024-07-13 05:21:43.715735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:52.356 [2024-07-13 05:21:43.715751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:52.356 [2024-07-13 05:21:43.715768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59056 len:8 PRP1 0x0 PRP2 0x0 00:32:52.356 [2024-07-13 05:21:43.715786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.356 [2024-07-13 05:21:43.715805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:52.356 [2024-07-13 05:21:43.715820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:52.356 [2024-07-13 05:21:43.715837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59064 len:8 PRP1 0x0 PRP2 0x0 00:32:52.356 [2024-07-13 05:21:43.715855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.356 [2024-07-13 05:21:43.715882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:52.356 [2024-07-13 05:21:43.715900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:52.356 [2024-07-13 05:21:43.715917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59072 len:8 PRP1 0x0 PRP2 0x0 00:32:52.356 [2024-07-13 05:21:43.715935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.356 [2024-07-13 05:21:43.715953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:52.356 [2024-07-13 05:21:43.715969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:52.356 [2024-07-13 05:21:43.715985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59080 len:8 PRP1 0x0 PRP2 0x0 00:32:52.356 [2024-07-13 05:21:43.716004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.356 [2024-07-13 05:21:43.716022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:52.356 [2024-07-13 05:21:43.716038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:52.356 [2024-07-13 05:21:43.716054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59088 len:8 PRP1 0x0 PRP2 0x0 00:32:52.356 [2024-07-13 05:21:43.716072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.356 [2024-07-13 05:21:43.716090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:52.356 [2024-07-13 05:21:43.716106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:52.356 [2024-07-13 05:21:43.716138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59096 len:8 PRP1 0x0 PRP2 0x0 00:32:52.356 [2024-07-13 05:21:43.716158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.356 [2024-07-13 05:21:43.716448] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2f00 was disconnected and freed. reset controller. 
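
Everything between the bdevperf banner and this point is the same two-record pattern repeated for every in-flight I/O on qpair 1: the queued command is printed, then completed manually as ABORTED - SQ DELETION (00/08) while the qpair is torn down. When triaging a dump like this, a one-liner condenses it to per-opcode counts (the file path is taken from the cat command above; the grep pattern simply keys off the record text shown here):

    grep -o 'print_command: \*NOTICE\*: [A-Z]*' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt |
        sort | uniq -c
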
00:32:52.356 [2024-07-13 05:21:43.716480] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:32:52.356 [2024-07-13 05:21:43.716536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:32:52.356 [2024-07-13 05:21:43.716562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.356 [2024-07-13 05:21:43.716585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:32:52.356 [2024-07-13 05:21:43.716605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.356 [2024-07-13 05:21:43.716624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:32:52.356 [2024-07-13 05:21:43.716643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.356 [2024-07-13 05:21:43.716663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:32:52.356 [2024-07-13 05:21:43.716682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.356 [2024-07-13 05:21:43.716700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:52.356 [2024-07-13 05:21:43.716797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor
00:32:52.356 [2024-07-13 05:21:43.720636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:52.356 [2024-07-13 05:21:43.809579] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
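
This block is the initiator-side half of the story: bdev_nvme_failover_trid retries the controller on the alternate trid (10.0.0.2:4421) that was registered alongside the original path, and the reset succeeds there. As an illustrative sketch of how such a standby path can be registered via rpc.py (the test actually hands its paths to bdevperf, so the bdev name NVMe0 and this exact invocation are assumptions, not the script's literal commands; -x failover keeps the second trid as a passive alternate rather than an active multipath leg):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Primary path, then the standby path under the same controller name.
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
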
00:32:52.356 [2024-07-13 05:21:47.311724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:122752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.356 [2024-07-13 05:21:47.311783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.356 [2024-07-13 05:21:47.311822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:122760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.356 [2024-07-13 05:21:47.311847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.356 [2024-07-13 05:21:47.311879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:122768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.356 [2024-07-13 05:21:47.311918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.356 [2024-07-13 05:21:47.311942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:122776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.356 [2024-07-13 05:21:47.311979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.356 [2024-07-13 05:21:47.312002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:122784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.356 [2024-07-13 05:21:47.312023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.356 [2024-07-13 05:21:47.312046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:122792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.356 [2024-07-13 05:21:47.312075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.356 [2024-07-13 05:21:47.312099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:122800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.356 [2024-07-13 05:21:47.312120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.356 [2024-07-13 05:21:47.312151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:122808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.356 [2024-07-13 05:21:47.312171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.356 [2024-07-13 05:21:47.312193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:122816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.356 [2024-07-13 05:21:47.312214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.356 [2024-07-13 05:21:47.312236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:122824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.356 [2024-07-13 05:21:47.312256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.356 [2024-07-13 
05:21:47.312279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:122832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.356 [2024-07-13 05:21:47.312299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats for every outstanding I/O on qid:1 -- READ lba:122840-122992 and WRITE lba:123000-123656, all len:8, each completed ABORTED - SQ DELETION (00/08) ...]
00:32:52.359 [2024-07-13 05:21:47.316931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:52.359 [2024-07-13 05:21:47.316958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123664 len:8 PRP1 0x0 PRP2 0x0
00:32:52.359 [2024-07-13 05:21:47.316978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.359 [2024-07-13 05:21:47.317007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... the same abort_queued_reqs / manual_complete_request sequence repeats for the remaining queued WRITEs lba:123672-123768, each completed ABORTED - SQ DELETION (00/08) ...]
00:32:52.359 [2024-07-13 05:21:47.318208] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f3180 was disconnected and freed. reset controller.
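An abort storm like the one above is easier to audit in aggregate than entry by entry. A minimal shell sketch, assuming the console output was saved to a file named build.log (hypothetical name), that tallies the aborted commands per opcode and reports the LBA span, keyed off the nvme_io_qpair_print_command format shown above:

# Tally READ/WRITE commands printed during the qpair teardown and report LBA spans.
# Assumption: log captured as build.log; lines match the format above, e.g.
# "... *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:122832 len:8 ..."
awk '/nvme_io_qpair_print_command/ {
    for (i = 1; i <= NF; i++) {
        if ($i == "READ" || $i == "WRITE") op = $i
        if ($i ~ /^lba:/) { sub(/^lba:/, "", $i); lba = $i + 0 }
    }
    n[op]++
    if (!(op in lo) || lba < lo[op]) lo[op] = lba
    if (lba > hi[op]) hi[op] = lba
}
END { for (op in n) printf "%s: %d cmds, lba %d-%d\n", op, n[op], lo[op], hi[op] }' build.log

For the excerpt above this would report one READ bucket and one WRITE bucket, matching the lba ranges summarized in the placeholder lines.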
00:32:52.359 [2024-07-13 05:21:47.318238] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:32:52.359 [2024-07-13 05:21:47.318289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:32:52.359 [2024-07-13 05:21:47.318315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same pair repeats for the remaining queued ASYNC EVENT REQUESTs (cid:2, cid:1, cid:0) on the admin qpair ...]
00:32:52.360 [2024-07-13 05:21:47.318454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:52.360 [2024-07-13 05:21:47.318536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor
00:32:52.360 [2024-07-13 05:21:47.322349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:52.360 [2024-07-13 05:21:47.452736] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
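The failover notice above (10.0.0.2:4421 to 10.0.0.2:4422) implies the initiator has both target listeners registered as alternate paths for the same controller, so bdev_nvme can switch trids when the active qpair drops. A minimal sketch of how such a pair is typically attached with SPDK's rpc.py; the bdev name Nvme0 and the script path are assumptions, while the addresses and the subsystem NQN are the ones in the log:

# Sketch: attach the same subsystem over two TCP paths so bdev_nvme can fail over
# between them. "Nvme0" and ./scripts/rpc.py are assumed; 10.0.0.2:4421/4422 and
# nqn.2016-06.io.spdk:cnode1 appear in the log above.
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4422 -n nqn.2016-06.io.spdk:cnode1 -x failover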
00:32:52.360 [2024-07-13 05:21:51.841754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:32:52.360 [2024-07-13 05:21:51.841883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same pair repeats for the remaining queued ASYNC EVENT REQUESTs (cid:1, cid:2, cid:3) on the admin qpair ...]
00:32:52.360 [2024-07-13 05:21:51.842032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set
00:32:52.360 [2024-07-13 05:21:51.842281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:106816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:52.360 [2024-07-13 05:21:51.842311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the print_command / print_completion pair repeats for the I/O outstanding on qid:1 after the second SQ deletion -- WRITE lba:106824-106864 and READ lba:105864-106328, all len:8, each completed ABORTED - SQ DELETION (00/08) ...]
00:32:52.361 [2024-07-13 05:21:51.845220] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:106336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.361 [2024-07-13 05:21:51.845240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.361 [2024-07-13 05:21:51.845263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:106344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.361 [2024-07-13 05:21:51.845283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.361 [2024-07-13 05:21:51.845306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:106352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.361 [2024-07-13 05:21:51.845326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.361 [2024-07-13 05:21:51.845348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:106360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.361 [2024-07-13 05:21:51.845368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.361 [2024-07-13 05:21:51.845390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:106368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.361 [2024-07-13 05:21:51.845410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.361 [2024-07-13 05:21:51.845432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:106376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.361 [2024-07-13 05:21:51.845456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.361 [2024-07-13 05:21:51.845480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:106384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.361 [2024-07-13 05:21:51.845501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.361 [2024-07-13 05:21:51.845523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:106392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.845543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.845565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:106400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.845586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.845609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:106408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.845630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.845652] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:118 nsid:1 lba:106416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.845672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.845694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:106424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.845715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.845737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:106432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.845757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.845779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:106440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.845799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.845821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:106448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.845842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.845863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.845894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.845917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:106464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.845938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.845960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:106472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.845981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.846008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:106480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.846029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.846052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:106488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.846072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.846094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 
nsid:1 lba:106496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.846115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.846137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:106504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.846157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.846179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:106512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.846200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.846222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:106520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.846242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.846263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:106528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.846283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.846305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.846325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.846362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:106544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.846383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.846405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:106552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.846425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.846448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:106560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.846468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.846490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:106568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.846510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.846532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106576 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.846552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.846579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:106584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.846600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.846622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:106592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.846642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.846665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:106600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.846687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.846709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.846729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.846751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:106616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.846772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.846795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:106872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.362 [2024-07-13 05:21:51.846815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.846837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:106880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.362 [2024-07-13 05:21:51.846857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.846887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.846909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.846931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:106632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.846951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.846973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:106640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:52.362 [2024-07-13 05:21:51.846994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.847016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:106648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.847036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.847058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.847078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.847100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:106664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.847125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.847147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:106672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.847168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.847190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:106680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.847210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.847233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:106688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.847253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.847275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:106696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.847295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.847317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:106704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.847338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.847361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:106712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.362 [2024-07-13 05:21:51.847381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.362 [2024-07-13 05:21:51.847404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:106720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.363 [2024-07-13 
05:21:51.847424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.363 [2024-07-13 05:21:51.847447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:106728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.363 [2024-07-13 05:21:51.847467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.363 [2024-07-13 05:21:51.847489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:106736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.363 [2024-07-13 05:21:51.847509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.363 [2024-07-13 05:21:51.847531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:106744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.363 [2024-07-13 05:21:51.847552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.363 [2024-07-13 05:21:51.847574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:106752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.363 [2024-07-13 05:21:51.847594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.363 [2024-07-13 05:21:51.847616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:106760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.363 [2024-07-13 05:21:51.847637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.363 [2024-07-13 05:21:51.847664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:106768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.363 [2024-07-13 05:21:51.847685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.363 [2024-07-13 05:21:51.847707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:106776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.363 [2024-07-13 05:21:51.847727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.363 [2024-07-13 05:21:51.847750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:106784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.363 [2024-07-13 05:21:51.847770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.363 [2024-07-13 05:21:51.847792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:106792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.363 [2024-07-13 05:21:51.847812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.363 [2024-07-13 05:21:51.847834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:106800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.363 [2024-07-13 05:21:51.847854] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.363 [2024-07-13 05:21:51.847881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(5) to be set 00:32:52.363 [2024-07-13 05:21:51.847919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:52.363 [2024-07-13 05:21:51.847938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:52.363 [2024-07-13 05:21:51.847956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106808 len:8 PRP1 0x0 PRP2 0x0 00:32:52.363 [2024-07-13 05:21:51.847974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.363 [2024-07-13 05:21:51.848258] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f3900 was disconnected and freed. reset controller. 00:32:52.363 [2024-07-13 05:21:51.848287] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:52.363 [2024-07-13 05:21:51.848310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.363 [2024-07-13 05:21:51.852218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.363 [2024-07-13 05:21:51.852294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:52.363 [2024-07-13 05:21:51.940407] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:52.363 00:32:52.363 Latency(us) 00:32:52.363 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:52.363 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:52.363 Verification LBA range: start 0x0 length 0x4000 00:32:52.363 NVMe0n1 : 15.02 6087.82 23.78 543.22 0.00 19267.95 801.00 22136.60 00:32:52.363 =================================================================================================================== 00:32:52.363 Total : 6087.82 23.78 543.22 0.00 19267.95 801.00 22136.60 00:32:52.363 Received shutdown signal, test time was about 15.000000 seconds 00:32:52.363 00:32:52.363 Latency(us) 00:32:52.363 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:52.363 =================================================================================================================== 00:32:52.363 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:52.363 05:21:58 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:52.363 05:21:58 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:32:52.363 05:21:58 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:32:52.363 05:21:58 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=825052 00:32:52.363 05:21:58 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:32:52.363 05:21:58 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 825052 /var/tmp/bdevperf.sock 00:32:52.363 05:21:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 825052 ']' 00:32:52.363 05:21:58 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:52.363 05:21:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:52.363 05:21:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:52.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:52.363 05:21:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:52.363 05:21:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:53.295 05:21:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:53.295 05:21:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:32:53.295 05:21:59 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:53.552 [2024-07-13 05:21:59.868324] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:53.552 05:21:59 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:53.830 [2024-07-13 05:22:00.117166] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:53.830 05:22:00 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:54.088 NVMe0n1 00:32:54.088 05:22:00 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:54.346 00:32:54.346 05:22:00 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:54.910 00:32:54.910 05:22:01 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:54.910 05:22:01 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:55.167 05:22:01 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:55.425 05:22:01 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:58.701 05:22:04 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:58.701 05:22:04 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:58.701 05:22:05 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=825736 00:32:58.701 05:22:05 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bdevperf.sock perform_tests 00:32:58.701 05:22:05 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 825736 00:33:00.073 0 00:33:00.073 05:22:06 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:00.073 [2024-07-13 05:21:58.707518] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:33:00.073 [2024-07-13 05:21:58.707680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid825052 ] 00:33:00.073 EAL: No free 2048 kB hugepages reported on node 1 00:33:00.073 [2024-07-13 05:21:58.834180] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.073 [2024-07-13 05:21:59.064282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.073 [2024-07-13 05:22:01.712007] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:00.073 [2024-07-13 05:22:01.712138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:00.073 [2024-07-13 05:22:01.712172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.073 [2024-07-13 05:22:01.712209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:00.073 [2024-07-13 05:22:01.712229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.073 [2024-07-13 05:22:01.712258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:00.073 [2024-07-13 05:22:01.712278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.073 [2024-07-13 05:22:01.712299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:00.073 [2024-07-13 05:22:01.712319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.073 [2024-07-13 05:22:01.712338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:00.073 [2024-07-13 05:22:01.712427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:00.073 [2024-07-13 05:22:01.712478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:00.073 [2024-07-13 05:22:01.805072] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:00.073 Running I/O for 1 seconds... 
00:33:00.073 00:33:00.073 Latency(us) 00:33:00.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:00.073 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:00.073 Verification LBA range: start 0x0 length 0x4000 00:33:00.073 NVMe0n1 : 1.02 6286.65 24.56 0.00 0.00 20269.22 3980.71 19126.80 00:33:00.073 =================================================================================================================== 00:33:00.073 Total : 6286.65 24.56 0.00 0.00 20269.22 3980.71 19126.80 00:33:00.073 05:22:06 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:00.073 05:22:06 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:33:00.073 05:22:06 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:00.331 05:22:06 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:00.331 05:22:06 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:33:00.588 05:22:06 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:00.846 05:22:07 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:33:04.125 05:22:10 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:04.125 05:22:10 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:33:04.125 05:22:10 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 825052 00:33:04.125 05:22:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 825052 ']' 00:33:04.125 05:22:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 825052 00:33:04.125 05:22:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:33:04.125 05:22:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:04.125 05:22:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 825052 00:33:04.125 05:22:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:04.125 05:22:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:04.125 05:22:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 825052' 00:33:04.125 killing process with pid 825052 00:33:04.125 05:22:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 825052 00:33:04.125 05:22:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 825052 00:33:05.061 05:22:11 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:33:05.061 05:22:11 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:05.319 05:22:11 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:33:05.319 05:22:11 
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:05.319 05:22:11 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:33:05.319 05:22:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:05.319 05:22:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:33:05.319 05:22:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:05.319 05:22:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:33:05.319 05:22:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:05.319 05:22:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:05.319 rmmod nvme_tcp 00:33:05.319 rmmod nvme_fabrics 00:33:05.319 rmmod nvme_keyring 00:33:05.319 05:22:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:05.319 05:22:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:33:05.319 05:22:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:33:05.319 05:22:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 822534 ']' 00:33:05.319 05:22:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 822534 00:33:05.319 05:22:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 822534 ']' 00:33:05.319 05:22:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 822534 00:33:05.319 05:22:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:33:05.319 05:22:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:05.319 05:22:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 822534 00:33:05.319 05:22:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:05.319 05:22:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:05.319 05:22:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 822534' 00:33:05.319 killing process with pid 822534 00:33:05.319 05:22:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 822534 00:33:05.319 05:22:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 822534 00:33:06.693 05:22:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:06.693 05:22:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:06.693 05:22:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:06.693 05:22:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:06.693 05:22:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:06.693 05:22:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:06.693 05:22:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:06.693 05:22:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.220 05:22:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:09.220 00:33:09.220 real 0m39.682s 00:33:09.220 user 2m16.663s 00:33:09.220 sys 0m7.044s 00:33:09.220 05:22:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:09.220 05:22:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:09.220 
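To summarize the pass/fail core of the nvmf_failover run that just finished: bdevperf I/O is started, one listener path is detached at a time, and the test asserts both that the NVMe0 controller survives each detach and that the log records exactly one successful reset per failover. A minimal bash sketch of that logic, reusing the rpc.py socket, subsystem and addresses from the trace above (rpc_py and testdir stand in for the script's real variables; this is an illustration, not the verbatim failover.sh):

  rpc_py="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  # Drop one path, give bdevperf time to fail over, then confirm the
  # controller is still present on a surviving path.
  $rpc_py bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3
  $rpc_py bdev_nvme_get_controllers | grep -q NVMe0 || exit 1
  # Each failover logs one 'Resetting controller successful' line; with
  # three paths removed over the run, exactly three must be present.
  count=$(grep -c 'Resetting controller successful' "$testdir/try.txt")
  (( count == 3 )) || exit 1
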
************************************ 00:33:09.220 END TEST nvmf_failover 00:33:09.220 ************************************ 00:33:09.220 05:22:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:09.220 05:22:15 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:09.220 05:22:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:09.220 05:22:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:09.220 05:22:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:09.220 ************************************ 00:33:09.220 START TEST nvmf_host_discovery 00:33:09.220 ************************************ 00:33:09.220 05:22:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:09.220 * Looking for test storage... 00:33:09.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:09.220 05:22:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:09.220 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:09.220 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:09.220 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:09.220 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:09.220 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:09.220 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:09.220 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:09.220 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:09.220 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:09.220 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:09.220 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:09.220 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:09.220 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:09.220 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:09.220 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:09.220 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:09.220 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:09.220 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:09.220 05:22:15 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:09.220 05:22:15 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.220 05:22:15 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.220 05:22:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2-@4 -- # PATH=... [three PATH= assignments trimmed: each prepends the /opt/golangci/1.54.2/bin, /opt/go/1.21.1/bin and /opt/protoc/21.7/bin toolchain dirs, already present several times over, ahead of the standard system dirs through /var/lib/snapd/snap/bin] 00:33:09.220 05:22:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:33:09.221 05:22:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo [expanded PATH trimmed] 00:33:09.221 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:33:09.221 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:09.221 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:09.221 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:09.221 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:09.221 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:09.221 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:09.221 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:09.221 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:09.221 05:22:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:33:09.221 05:22:15 
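Before the discovery test proper begins below: what it exercises is the target's discovery service, reachable on DISCOVERY_PORT 8009 (set in the next few steps). Outside the harness, the same endpoint can be queried by hand with nvme-cli once the target is listening; an illustrative one-liner, with the 10.0.0.2 target address taken from the nvmf_tcp_init output further down (not part of this trace):

  # Fetch the discovery log page; every exported subsystem shows up as one entry.
  nvme discover -t tcp -a 10.0.0.2 -s 8009
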
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:33:09.221 05:22:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:09.221 05:22:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:09.221 05:22:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:09.221 05:22:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:33:09.221 05:22:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:33:09.221 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:09.221 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:09.221 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:09.221 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:09.221 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:09.221 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.221 05:22:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:09.221 05:22:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.221 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:09.221 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:09.221 05:22:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:33:09.221 05:22:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:11.122 05:22:17 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:11.122 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:11.122 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:11.122 05:22:17 
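Condensed, the scan above keeps only NICs whose PCI device ID appears on the supported e810/x722/mlx lists; here both 0x8086:0x159b functions qualify, and the rdma-specific branches are skipped since the transport is tcp. A stripped-down sketch of that selection (assuming pci_bus_cache has been prepopulated by scripts/common.sh, as in the real run):

  # Collect the bus addresses of supported Intel E810 parts, then resolve
  # the kernel net device name behind each matched PCI function.
  e810=(${pci_bus_cache["0x8086:0x1592"]} ${pci_bus_cache["0x8086:0x159b"]})
  pci_devs=("${e810[@]}")
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"
  done
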
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:11.122 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:11.122 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:11.122 05:22:17 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:11.122 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:11.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:11.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:33:11.123 00:33:11.123 --- 10.0.0.2 ping statistics --- 00:33:11.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:11.123 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:33:11.123 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:11.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:11.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:33:11.123 00:33:11.123 --- 10.0.0.1 ping statistics --- 00:33:11.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:11.123 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:33:11.123 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:11.123 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:33:11.123 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:11.123 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:11.123 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:11.123 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:11.123 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:11.123 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:11.123 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:11.123 05:22:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:33:11.123 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:11.123 05:22:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:11.123 05:22:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.123 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=828588 00:33:11.123 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:11.123 05:22:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 828588 00:33:11.123 05:22:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 828588 ']' 00:33:11.123 05:22:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:11.123 05:22:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:11.123 05:22:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:11.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:11.123 05:22:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:11.123 05:22:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.123 [2024-07-13 05:22:17.526837] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:33:11.123 [2024-07-13 05:22:17.527002] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:11.123 EAL: No free 2048 kB hugepages reported on node 1 00:33:11.381 [2024-07-13 05:22:17.667128] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.670 [2024-07-13 05:22:17.926199] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:11.670 [2024-07-13 05:22:17.926259] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:11.670 [2024-07-13 05:22:17.926287] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:11.670 [2024-07-13 05:22:17.926310] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:11.670 [2024-07-13 05:22:17.926332] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
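
The nvmf_tcp_init sequence traced above (nvmf/common.sh@229-@268) builds the whole test network out of the two E810 ports: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2/24, the initiator keeps cvl_0_1 as 10.0.0.1/24, TCP port 4420 is opened in iptables, and both directions are ping-verified before the target app starts. A minimal standalone sketch of the same topology, with the interface names and addresses taken from this run (they will differ on other inventory; requires root):

    TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip -4 addr flush dev "$TGT_IF" && ip -4 addr flush dev "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"              # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"          # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                             # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1         # target -> initiator
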
00:33:11.670 [2024-07-13 05:22:17.926376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.238 [2024-07-13 05:22:18.519200] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.238 [2024-07-13 05:22:18.527395] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.238 null0 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.238 null1 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=828743 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 828743 /tmp/host.sock 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 828743 ']' 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:12.238 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:12.238 05:22:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.238 [2024-07-13 05:22:18.635446] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:33:12.238 [2024-07-13 05:22:18.635593] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid828743 ] 00:33:12.238 EAL: No free 2048 kB hugepages reported on node 1 00:33:12.496 [2024-07-13 05:22:18.774987] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.755 [2024-07-13 05:22:19.029029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # sort 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 
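
Stripped of the xtrace noise, the bring-up the trace has executed so far (discovery.sh@30-@37, @44-@51, @86, @90) reduces to a short RPC script. A sketch, assuming rpc_cmd resolves to SPDK's scripts/rpc.py; the wrapper's default /var/tmp/spdk.sock unix socket reaches the in-namespace target (unix sockets are not confined by net namespaces), while the second, host-side nvmf_tgt was started with -r /tmp/host.sock:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # target side (flags copied verbatim from the trace)
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009                 # discovery service on port 8009
    $rpc bdev_null_create null0 1000 512           # two null bdevs to surface as namespaces
    $rpc bdev_null_create null1 1000 512
    $rpc bdev_wait_for_examine
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    # host side: enable bdev_nvme logging and start discovery against 8009
    $rpc -s /tmp/host.sock log_set_flag bdev_nvme
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
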
00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:13.322 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.581 [2024-07-13 05:22:19.907443] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.581 05:22:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.581 05:22:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:13.582 05:22:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:33:13.582 05:22:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:13.582 05:22:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:13.582 05:22:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:33:13.582 05:22:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.582 05:22:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.582 05:22:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.582 05:22:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:13.582 05:22:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:13.582 05:22:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:13.582 05:22:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:13.582 05:22:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:13.582 05:22:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:13.582 05:22:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:13.582 05:22:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:13.582 05:22:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.582 05:22:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.582 05:22:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:13.582 05:22:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:13.582 05:22:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.582 05:22:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:33:13.582 05:22:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:33:14.516 [2024-07-13 05:22:20.662052] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:14.516 [2024-07-13 05:22:20.662092] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:14.516 [2024-07-13 05:22:20.662135] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:14.516 [2024-07-13 05:22:20.748454] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:14.516 [2024-07-13 05:22:20.973010] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:14.516 [2024-07-13 05:22:20.973044] 
bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
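
Every check from here on funnels through the waitforcondition helper whose internals keep surfacing at autotest_common.sh@912-@918. Reconstructed from those traced lines (a sketch, not the verbatim SPDK source), it is a ten-attempt, one-second poll over an eval'd condition string:

    waitforcondition() {
        local cond=$1                  # @912: condition string, eval'd verbatim
        local max=10                   # @913
        while (( max-- )); do          # @914
            if eval "$cond"; then      # @915
                return 0               # @916: condition holds
            fi
            sleep 1                    # @918: back off and retry
        done
        return 1                       # gave up after ~10 s
    }
    # as used at @106/@113 above:
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
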
00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:14.775 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:15.034 [2024-07-13 05:22:21.344279] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:15.034 [2024-07-13 05:22:21.345555] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:15.034 [2024-07-13 05:22:21.345614] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:15.034 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:15.035 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:15.035 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.035 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:15.035 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:15.035 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:15.035 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:15.035 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.035 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:15.035 [2024-07-13 05:22:21.432469] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:33:15.035 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:15.035 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:15.035 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:15.035 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:15.035 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:15.035 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:15.035 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:33:15.035 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:15.035 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:15.035 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.035 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:15.035 05:22:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:15.035 05:22:21 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:33:15.035 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.035 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:33:15.035 05:22:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:33:15.035 [2024-07-13 05:22:21.494226] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:15.035 [2024-07-13 05:22:21.494272] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:15.035 [2024-07-13 05:22:21.494292] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:16.411 [2024-07-13 05:22:22.564932] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:16.411 [2024-07-13 05:22:22.564993] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:16.411 [2024-07-13 05:22:22.573685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:16.411 [2024-07-13 05:22:22.573747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:16.411 [2024-07-13 05:22:22.573776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:16.411 [2024-07-13 05:22:22.573799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:16.411 [2024-07-13 05:22:22.573820] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:16.411 [2024-07-13 05:22:22.573840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:16.411 [2024-07-13 05:22:22.573862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:16.411 [2024-07-13 05:22:22.573900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:16.411 [2024-07-13 05:22:22.573927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:16.411 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.411 [2024-07-13 05:22:22.583665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:16.411 [2024-07-13 05:22:22.593711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:16.411 [2024-07-13 05:22:22.594038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.411 [2024-07-13 05:22:22.594085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:16.411 [2024-07-13 05:22:22.594112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:16.411 [2024-07-13 05:22:22.594149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:16.411 [2024-07-13 05:22:22.594213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:16.411 [2024-07-13 05:22:22.594244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:16.412 [2024-07-13 05:22:22.594270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:16.412 [2024-07-13 05:22:22.594325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
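
The is_notification_count_eq checks woven through the trace (@74-@80) hang off that same polling helper: each pass pulls everything newer than the last-seen notification id from the host app and counts it with jq, which is why notify_id advanced 0 -> 1 -> 2 above as namespaces were attached. A sketch of the two helpers as the traced lines suggest them (rpc.py again standing in for rpc_cmd):

    get_notification_count() {                      # host/discovery.sh@74-@75
        notification_count=$(rpc.py -s /tmp/host.sock notify_get_notifications \
            -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))   # advance the cursor
    }
    is_notification_count_eq() {                    # @79-@80
        local expected_count=$1
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }
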
00:33:16.412 [2024-07-13 05:22:22.603832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:16.412 [2024-07-13 05:22:22.604092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.412 [2024-07-13 05:22:22.604129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:16.412 [2024-07-13 05:22:22.604164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:16.412 [2024-07-13 05:22:22.604196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:16.412 [2024-07-13 05:22:22.604237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:16.412 [2024-07-13 05:22:22.604257] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:16.412 [2024-07-13 05:22:22.604291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:16.412 [2024-07-13 05:22:22.604354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:16.412 [2024-07-13 05:22:22.613958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:16.412 [2024-07-13 05:22:22.614186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.412 [2024-07-13 05:22:22.614228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:16.412 [2024-07-13 05:22:22.614256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:16.412 [2024-07-13 05:22:22.614291] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:16.412 [2024-07-13 05:22:22.614352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:16.412 [2024-07-13 05:22:22.614381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:16.412 [2024-07-13 05:22:22.614402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:16.412 [2024-07-13 05:22:22.614431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:16.412 [2024-07-13 05:22:22.624071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:16.412 [2024-07-13 05:22:22.624349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.412 [2024-07-13 05:22:22.624388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:16.412 [2024-07-13 05:22:22.624412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:16.412 [2024-07-13 05:22:22.624445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:16.412 [2024-07-13 05:22:22.624476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:16.412 [2024-07-13 05:22:22.624497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:16.412 [2024-07-13 05:22:22.624516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:16.412 [2024-07-13 05:22:22.624544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:16.412 [2024-07-13 05:22:22.634193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:16.412 [2024-07-13 05:22:22.634396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.412 [2024-07-13 05:22:22.634433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:16.412 [2024-07-13 05:22:22.634456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:16.412 [2024-07-13 05:22:22.634488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:16.412 [2024-07-13 05:22:22.634534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:16.412 [2024-07-13 05:22:22.634559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:16.412 [2024-07-13 05:22:22.634579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:16.412 [2024-07-13 05:22:22.634607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.412 [2024-07-13 05:22:22.644283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:16.412 [2024-07-13 05:22:22.644565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.412 [2024-07-13 05:22:22.644602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:16.412 [2024-07-13 05:22:22.644625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:16.412 [2024-07-13 05:22:22.644657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:16.412 [2024-07-13 05:22:22.644704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:16.412 [2024-07-13 05:22:22.644729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:16.412 [2024-07-13 05:22:22.644748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:16.412 [2024-07-13 05:22:22.644777] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:16.412 [2024-07-13 05:22:22.654389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:16.412 [2024-07-13 05:22:22.654632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.412 [2024-07-13 05:22:22.654669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:16.412 [2024-07-13 05:22:22.654692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:16.412 [2024-07-13 05:22:22.654725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:16.412 [2024-07-13 05:22:22.654786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:16.412 [2024-07-13 05:22:22.654814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:16.412 [2024-07-13 05:22:22.654834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: 
*ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:16.412 [2024-07-13 05:22:22.654863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:16.412 [2024-07-13 05:22:22.664497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:16.412 [2024-07-13 05:22:22.664743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.412 [2024-07-13 05:22:22.664781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:16.412 [2024-07-13 05:22:22.664805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:16.412 [2024-07-13 05:22:22.664838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:16.412 [2024-07-13 05:22:22.664877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:16.412 [2024-07-13 05:22:22.664901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:16.412 [2024-07-13 05:22:22.664920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:16.412 [2024-07-13 05:22:22.664948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:16.412 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.412 [2024-07-13 05:22:22.674612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:16.412 [2024-07-13 05:22:22.674923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.412 [2024-07-13 05:22:22.674960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:16.412 [2024-07-13 05:22:22.674997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:16.412 [2024-07-13 05:22:22.675031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:16.412 [2024-07-13 05:22:22.675077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:16.412 [2024-07-13 05:22:22.675104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:16.412 [2024-07-13 05:22:22.675125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:16.412 [2024-07-13 05:22:22.675164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:16.413 [2024-07-13 05:22:22.684724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:16.413 [2024-07-13 05:22:22.685048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.413 [2024-07-13 05:22:22.685086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:16.413 [2024-07-13 05:22:22.685109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:16.413 [2024-07-13 05:22:22.685142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:16.413 [2024-07-13 05:22:22.685172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:16.413 [2024-07-13 05:22:22.685194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:16.413 [2024-07-13 05:22:22.685212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:16.413 [2024-07-13 05:22:22.685241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:16.413 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:33:16.413 05:22:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:33:16.413 [2024-07-13 05:22:22.692349] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:16.413 [2024-07-13 05:22:22.692411] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:17.344 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:17.344 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:17.345 05:22:23 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:17.345 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:17.603 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.603 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:33:17.603 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:17.603 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:17.603 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:17.603 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:17.603 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:17.603 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:17.603 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:17.603 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:17.603 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:17.603 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:17.603 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:17.603 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.603 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.603 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.603 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:17.603 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:17.603 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:17.603 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:17.603 05:22:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:17.603 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.603 05:22:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.537 [2024-07-13 05:22:24.973099] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:18.537 [2024-07-13 05:22:24.973163] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:18.537 [2024-07-13 05:22:24.973223] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:18.796 [2024-07-13 05:22:25.059497] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:18.796 [2024-07-13 05:22:25.289935] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:18.796 [2024-07-13 05:22:25.290029] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:18.796 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.796 05:22:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:18.796 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:33:18.796 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:18.796 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:18.796 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:18.796 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:33:19.055 request: 00:33:19.055 { 00:33:19.055 "name": "nvme", 00:33:19.055 "trtype": "tcp", 00:33:19.055 "traddr": "10.0.0.2", 00:33:19.055 "adrfam": "ipv4", 00:33:19.055 "trsvcid": "8009", 00:33:19.055 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:19.055 "wait_for_attach": true, 00:33:19.055 "method": "bdev_nvme_start_discovery", 00:33:19.055 "req_id": 1 00:33:19.055 } 00:33:19.055 Got JSON-RPC error response 00:33:19.055 response: 00:33:19.055 { 00:33:19.055 "code": -17, 00:33:19.055 "message": "File exists" 00:33:19.055 } 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:19.055 request: 00:33:19.055 { 00:33:19.055 "name": "nvme_second", 00:33:19.055 "trtype": "tcp", 00:33:19.055 "traddr": "10.0.0.2", 00:33:19.055 "adrfam": "ipv4", 00:33:19.055 "trsvcid": "8009", 00:33:19.055 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:19.055 "wait_for_attach": true, 00:33:19.055 "method": "bdev_nvme_start_discovery", 00:33:19.055 "req_id": 1 00:33:19.055 } 00:33:19.055 Got JSON-RPC error response 00:33:19.055 response: 00:33:19.055 { 00:33:19.055 "code": -17, 00:33:19.055 "message": "File exists" 00:33:19.055 } 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.055 05:22:25 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.055 05:22:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:20.428 [2024-07-13 05:22:26.501760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.428 [2024-07-13 05:22:26.501829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3400 with addr=10.0.0.2, port=8010 00:33:20.428 [2024-07-13 05:22:26.501936] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:20.428 [2024-07-13 05:22:26.501966] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:20.428 [2024-07-13 05:22:26.501988] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:21.361 [2024-07-13 05:22:27.504234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.361 [2024-07-13 05:22:27.504297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3680 with addr=10.0.0.2, port=8010 00:33:21.361 [2024-07-13 05:22:27.504375] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:21.361 [2024-07-13 05:22:27.504400] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:21.361 [2024-07-13 05:22:27.504422] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:22.294 [2024-07-13 05:22:28.506280] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:33:22.294 request: 00:33:22.294 { 00:33:22.294 "name": "nvme_second", 00:33:22.294 "trtype": "tcp", 00:33:22.294 "traddr": "10.0.0.2", 00:33:22.294 "adrfam": "ipv4", 00:33:22.294 "trsvcid": "8010", 00:33:22.294 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:22.294 "wait_for_attach": false, 00:33:22.294 "attach_timeout_ms": 3000, 00:33:22.294 "method": "bdev_nvme_start_discovery", 00:33:22.294 "req_id": 1 00:33:22.294 } 00:33:22.294 Got JSON-RPC error response 00:33:22.294 response: 00:33:22.294 { 00:33:22.294 "code": -110, 
00:33:22.294 "message": "Connection timed out" 00:33:22.294 } 00:33:22.294 05:22:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:22.294 05:22:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:33:22.294 05:22:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:22.294 05:22:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:22.294 05:22:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:22.294 05:22:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 828743 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:22.295 rmmod nvme_tcp 00:33:22.295 rmmod nvme_fabrics 00:33:22.295 rmmod nvme_keyring 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 828588 ']' 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 828588 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 828588 ']' 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 828588 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 828588 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:22.295 
05:22:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 828588' 00:33:22.295 killing process with pid 828588 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 828588 00:33:22.295 05:22:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 828588 00:33:23.673 05:22:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:23.673 05:22:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:23.673 05:22:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:23.673 05:22:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:23.674 05:22:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:23.674 05:22:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:23.674 05:22:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:23.674 05:22:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:25.570 05:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:25.570 00:33:25.570 real 0m16.730s 00:33:25.570 user 0m25.372s 00:33:25.570 sys 0m3.207s 00:33:25.570 05:22:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:25.570 05:22:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:25.570 ************************************ 00:33:25.570 END TEST nvmf_host_discovery 00:33:25.570 ************************************ 00:33:25.570 05:22:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:25.570 05:22:32 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:25.570 05:22:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:25.570 05:22:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:25.570 05:22:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:25.570 ************************************ 00:33:25.570 START TEST nvmf_host_multipath_status 00:33:25.570 ************************************ 00:33:25.570 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:25.828 * Looking for test storage... 
00:33:25.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:25.828 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:25.828 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:33:25.828 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:25.828 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:25.828 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:25.828 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:25.828 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:25.828 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:25.828 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:25.828 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:25.828 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:25.828 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:25.828 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:25.828 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:25.828 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:25.828 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:25.828 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:25.828 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:25.828 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:25.828 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:25.828 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:25.828 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:25.828 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.828 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.828 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.828 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:33:25.828 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.829 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:33:25.829 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:25.829 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:25.829 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:25.829 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:25.829 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:25.829 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:25.829 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:25.829 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:25.829 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:33:25.829 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:25.829 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:25.829 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:33:25.829 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:25.829 05:22:32 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:25.829 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:33:25.829 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:25.829 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:25.829 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:25.829 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:25.829 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:25.829 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:25.829 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:25.829 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:25.829 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:25.829 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:25.829 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:33:25.829 05:22:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:27.748 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:27.748 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:27.748 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:27.748 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:27.748 05:22:33 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:27.748 05:22:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:27.748 05:22:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:27.748 05:22:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:27.748 05:22:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:27.748 05:22:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:27.748 05:22:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:27.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:27.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:33:27.748 00:33:27.748 --- 10.0.0.2 ping statistics --- 00:33:27.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:27.748 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:33:27.748 05:22:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:27.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:27.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:33:27.748 00:33:27.748 --- 10.0.0.1 ping statistics --- 00:33:27.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:27.748 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:33:27.748 05:22:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:27.748 05:22:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:33:27.748 05:22:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:27.748 05:22:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:27.748 05:22:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:27.748 05:22:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:27.748 05:22:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:27.748 05:22:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:27.748 05:22:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:27.749 05:22:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:33:27.749 05:22:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:27.749 05:22:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:27.749 05:22:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:27.749 05:22:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=832166 00:33:27.749 05:22:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:27.749 05:22:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 832166 00:33:27.749 05:22:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 832166 ']' 00:33:27.749 05:22:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:27.749 05:22:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:27.749 05:22:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:27.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:27.749 05:22:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:27.749 05:22:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:27.749 [2024-07-13 05:22:34.168361] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
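Condensed, the nvmf_tcp_init sequence traced above builds a two-namespace loopback: one E810 port (cvl_0_0) is moved into a dedicated target namespace and its peer port (cvl_0_1) stays in the root namespace as the initiator, with a ping in each direction to verify the link before the target starts. The commands below are lifted from the xtrace itself; interface names and 10.0.0.x addresses appear verbatim in the log:

    # Target side lives in its own network namespace; initiator stays in root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator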
00:33:27.749 [2024-07-13 05:22:34.168494] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:27.749 EAL: No free 2048 kB hugepages reported on node 1 00:33:28.006 [2024-07-13 05:22:34.305712] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:28.264 [2024-07-13 05:22:34.564443] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:28.264 [2024-07-13 05:22:34.564532] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:28.264 [2024-07-13 05:22:34.564567] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:28.264 [2024-07-13 05:22:34.564588] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:28.264 [2024-07-13 05:22:34.564612] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:28.264 [2024-07-13 05:22:34.564727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:28.264 [2024-07-13 05:22:34.564736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:28.830 05:22:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:28.830 05:22:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:33:28.830 05:22:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:28.830 05:22:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:28.830 05:22:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:28.830 05:22:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:28.830 05:22:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=832166 00:33:28.830 05:22:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:29.086 [2024-07-13 05:22:35.372700] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:29.086 05:22:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:29.344 Malloc0 00:33:29.344 05:22:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:33:29.602 05:22:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:29.859 05:22:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:30.117 [2024-07-13 05:22:36.454329] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:30.117 05:22:36 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:30.375 [2024-07-13 05:22:36.694999] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:30.375 05:22:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=832456 00:33:30.375 05:22:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:30.375 05:22:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:30.375 05:22:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 832456 /var/tmp/bdevperf.sock 00:33:30.375 05:22:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 832456 ']' 00:33:30.375 05:22:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:30.375 05:22:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:30.375 05:22:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:30.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:30.375 05:22:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:30.375 05:22:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:31.307 05:22:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:31.307 05:22:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:33:31.307 05:22:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:31.565 05:22:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:33:32.130 Nvme0n1 00:33:32.130 05:22:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:32.698 Nvme0n1 00:33:32.698 05:22:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:33:32.698 05:22:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:34.603 05:22:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:33:34.603 05:22:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:34.862 05:22:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:35.120 05:22:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:33:36.055 05:22:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:33:36.055 05:22:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:36.055 05:22:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.055 05:22:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:36.313 05:22:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.313 05:22:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:36.313 05:22:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.313 05:22:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:36.570 05:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:36.570 05:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:36.570 05:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.570 05:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:36.828 05:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.828 05:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:36.828 05:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.828 05:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:37.086 05:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:37.086 05:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:37.086 05:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:37.086 05:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:37.344 05:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:37.344 05:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:37.344 05:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:37.344 05:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:37.603 05:22:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:37.603 05:22:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:37.603 05:22:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:37.861 05:22:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:38.119 05:22:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:39.054 05:22:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:39.054 05:22:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:39.054 05:22:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.054 05:22:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:39.311 05:22:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:39.311 05:22:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:39.311 05:22:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.311 05:22:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:39.569 05:22:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:39.569 05:22:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:39.569 05:22:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.569 05:22:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:39.827 05:22:46 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:39.827 05:22:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:39.827 05:22:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.827 05:22:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:40.084 05:22:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:40.084 05:22:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:40.084 05:22:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:40.084 05:22:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:40.342 05:22:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:40.342 05:22:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:40.342 05:22:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:40.342 05:22:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:40.600 05:22:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:40.600 05:22:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:40.600 05:22:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:40.858 05:22:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:41.123 05:22:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:42.090 05:22:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:42.090 05:22:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:42.090 05:22:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:42.090 05:22:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:42.348 05:22:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:42.348 05:22:48 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:42.348 05:22:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:42.348 05:22:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:42.606 05:22:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:42.606 05:22:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:42.606 05:22:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:42.606 05:22:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:42.863 05:22:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:42.863 05:22:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:42.863 05:22:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:42.863 05:22:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:43.121 05:22:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:43.121 05:22:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:43.121 05:22:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.121 05:22:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:43.379 05:22:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:43.379 05:22:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:43.379 05:22:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.379 05:22:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:43.638 05:22:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:43.638 05:22:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:43.638 05:22:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:43.896 05:22:50 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:44.155 05:22:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:45.532 05:22:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:45.532 05:22:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:45.532 05:22:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:45.532 05:22:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:45.532 05:22:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:45.532 05:22:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:45.532 05:22:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:45.532 05:22:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:45.790 05:22:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:45.790 05:22:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:45.790 05:22:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:45.790 05:22:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:46.048 05:22:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:46.048 05:22:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:46.048 05:22:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.048 05:22:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:46.306 05:22:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:46.306 05:22:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:46.306 05:22:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.306 05:22:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:46.578 05:22:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:33:46.578 05:22:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:46.578 05:22:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.578 05:22:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:46.842 05:22:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:46.842 05:22:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:46.842 05:22:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:47.100 05:22:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:47.358 05:22:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:48.292 05:22:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:48.292 05:22:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:48.292 05:22:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.292 05:22:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:48.550 05:22:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:48.550 05:22:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:48.550 05:22:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.550 05:22:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:48.808 05:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:48.808 05:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:48.808 05:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.808 05:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:49.066 05:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:49.066 05:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:33:49.066 05:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:49.066 05:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:49.325 05:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:49.325 05:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:49.325 05:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:49.325 05:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:49.583 05:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:49.583 05:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:49.583 05:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:49.583 05:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:49.841 05:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:49.841 05:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:49.841 05:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:50.100 05:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:50.358 05:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:51.294 05:22:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:51.294 05:22:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:51.294 05:22:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.294 05:22:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:51.552 05:22:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:51.552 05:22:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:51.552 05:22:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.552 05:22:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:51.811 05:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:51.811 05:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:51.811 05:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.811 05:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:52.069 05:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:52.069 05:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:52.069 05:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:52.069 05:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:52.328 05:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:52.328 05:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:52.328 05:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:52.328 05:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:52.587 05:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:52.587 05:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:52.587 05:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:52.587 05:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:52.846 05:22:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:52.846 05:22:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:53.105 05:22:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:33:53.105 05:22:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:33:53.363 05:22:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:53.621 05:22:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:54.558 05:23:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:54.558 05:23:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:54.558 05:23:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.558 05:23:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:54.831 05:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:54.831 05:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:54.831 05:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.831 05:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:55.140 05:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:55.140 05:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:55.140 05:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.140 05:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:55.399 05:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:55.399 05:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:55.399 05:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.399 05:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:55.657 05:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:55.657 05:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:55.657 05:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.657 05:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:55.915 05:23:02 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:55.915 05:23:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:55.915 05:23:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.915 05:23:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:56.174 05:23:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:56.174 05:23:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:56.174 05:23:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:56.174 05:23:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:56.445 05:23:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:57.823 05:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:57.823 05:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:57.823 05:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:57.823 05:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:57.823 05:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:57.823 05:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:57.823 05:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:57.824 05:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:58.083 05:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:58.083 05:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:58.083 05:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:58.083 05:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:58.341 05:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:58.341 05:23:04 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:58.341 05:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:58.341 05:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:58.598 05:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:58.598 05:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:58.598 05:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:58.598 05:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:58.856 05:23:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:58.856 05:23:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:58.856 05:23:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:58.856 05:23:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:59.114 05:23:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:59.114 05:23:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:59.114 05:23:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:59.372 05:23:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:59.630 05:23:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:34:00.563 05:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:34:00.563 05:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:00.563 05:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:00.563 05:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:00.820 05:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:00.820 05:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:00.820 05:23:07 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:00.820 05:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:34:01.078 05:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:01.078 05:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:34:01.078 05:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:01.078 05:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:34:01.334 05:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:01.334 05:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:34:01.334 05:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:01.334 05:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:34:01.590 05:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:01.590 05:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:34:01.590 05:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:01.590 05:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:34:01.847 05:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:01.847 05:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:34:01.847 05:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:01.847 05:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:34:02.105 05:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:02.105 05:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:34:02.105 05:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:34:02.361 05:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:34:02.619 05:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:34:03.552 05:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:34:03.552 05:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:34:03.552 05:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:03.552 05:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:34:03.809 05:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:03.809 05:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:34:03.809 05:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:03.809 05:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:34:04.067 05:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:34:04.067 05:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:34:04.067 05:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:04.067 05:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:34:04.325 05:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:04.325 05:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:34:04.325 05:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:04.325 05:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:34:04.583 05:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:04.583 05:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:34:04.583 05:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:04.583 05:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:34:04.840 05:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:04.840 05:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:34:04.840 05:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:04.840 05:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:34:05.098 05:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:34:05.098 05:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 832456
00:34:05.098 05:23:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 832456 ']'
00:34:05.098 05:23:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 832456
00:34:05.098 05:23:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:34:05.098 05:23:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:34:05.098 05:23:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 832456
00:34:05.098 05:23:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:34:05.098 05:23:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:34:05.098 05:23:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 832456'
killing process with pid 832456
05:23:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 832456
05:23:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 832456
00:34:05.661 Connection closed with partial response:
00:34:05.661
00:34:05.661
00:34:06.237 05:23:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 832456
00:34:06.237 05:23:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:34:06.237 [2024-07-13 05:22:36.790999] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:34:06.237 [2024-07-13 05:22:36.791151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid832456 ]
00:34:06.237 EAL: No free 2048 kB hugepages reported on node 1
00:34:06.237 [2024-07-13 05:22:36.915145] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:06.237 [2024-07-13 05:22:37.146567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:34:06.237 Running I/O for 90 seconds...
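The trace above is driven by two small helpers in multipath_status.sh. A minimal sketch of what they appear to do, reconstructed from the @59/@60 and @64 trace lines (variable names such as $rpc_py, $bdevperf_rpc_sock and $NQN are assumptions for illustration, not taken from the script source):

    # port_status <trsvcid> <attr> <expected>: ask bdevperf for its I/O paths over
    # its RPC socket, pick the path whose listener uses <trsvcid>, and compare one
    # attribute (current/connected/accessible) against the expected value.
    port_status() {
        local port=$1 attr=$2 expected=$3
        [[ $($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr") == "$expected" ]]
    }

    # set_ANA_state <state_4420> <state_4421>: flip the ANA state of both target
    # listeners, matching the two nvmf_subsystem_listener_set_ana_state calls above.
    set_ANA_state() {
        $rpc_py nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n $1
        $rpc_py nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4421 -n $2
    }

After set_ANA_state non_optimized inaccessible, check_status true false true true true false asserts that 4420 is the current path, both ports remain connected, and only 4420 is still accessible, which is exactly what the [[ true/false ]] comparisons above verify. The ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions in the bdevperf output below are the host-side view of such ANA flips: I/O in flight on a path completes with that status while its listener is inaccessible.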
00:34:06.237 [2024-07-13 05:22:53.355721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:26400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.237 [2024-07-13 05:22:53.355798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:06.237 [2024-07-13 05:22:53.355852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:26408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.237 [2024-07-13 05:22:53.355890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:06.237 [2024-07-13 05:22:53.355949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.237 [2024-07-13 05:22:53.355976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:06.237 [2024-07-13 05:22:53.356027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:26424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.237 [2024-07-13 05:22:53.356053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:06.237 [2024-07-13 05:22:53.356089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:26432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.237 [2024-07-13 05:22:53.356113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:06.237 [2024-07-13 05:22:53.356156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.237 [2024-07-13 05:22:53.356181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:06.237 [2024-07-13 05:22:53.356231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:26448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.237 [2024-07-13 05:22:53.356255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:06.237 [2024-07-13 05:22:53.356288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:26456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.237 [2024-07-13 05:22:53.356313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:06.237 [2024-07-13 05:22:53.356346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:26464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.237 [2024-07-13 05:22:53.356369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:06.237 [2024-07-13 05:22:53.356402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:26472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.237 [2024-07-13 05:22:53.356426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:58 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:06.237 [2024-07-13 05:22:53.356459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:26480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.237 [2024-07-13 05:22:53.356492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:06.237 [2024-07-13 05:22:53.356543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:26488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.237 [2024-07-13 05:22:53.356568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:06.237 [2024-07-13 05:22:53.356602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.237 [2024-07-13 05:22:53.356626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.237 [2024-07-13 05:22:53.356660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.237 [2024-07-13 05:22:53.356685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:06.237 [2024-07-13 05:22:53.356719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.237 [2024-07-13 05:22:53.356743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:06.237 [2024-07-13 05:22:53.356777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:26520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.237 [2024-07-13 05:22:53.356801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:06.237 [2024-07-13 05:22:53.357130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:26528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.237 [2024-07-13 05:22:53.357162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:06.237 [2024-07-13 05:22:53.357203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:26536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.237 [2024-07-13 05:22:53.357228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:06.237 [2024-07-13 05:22:53.357264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:26544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.237 [2024-07-13 05:22:53.357289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:06.237 [2024-07-13 05:22:53.357323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:26552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.237 [2024-07-13 05:22:53.357348] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:06.237 [2024-07-13 05:22:53.357383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.237 [2024-07-13 05:22:53.357408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.237 [2024-07-13 05:22:53.357459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.237 [2024-07-13 05:22:53.357484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:06.237 [2024-07-13 05:22:53.357533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.237 [2024-07-13 05:22:53.357557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:06.237 [2024-07-13 05:22:53.357595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.237 [2024-07-13 05:22:53.357619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.357651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.238 [2024-07-13 05:22:53.357675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.357707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.238 [2024-07-13 05:22:53.357730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.357763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.238 [2024-07-13 05:22:53.357786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.357819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.238 [2024-07-13 05:22:53.357842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.357904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.238 [2024-07-13 05:22:53.357930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.357965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:25904 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:06.238 [2024-07-13 05:22:53.357990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.358025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.238 [2024-07-13 05:22:53.358050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.358084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.238 [2024-07-13 05:22:53.358109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.358144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.238 [2024-07-13 05:22:53.358184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.358218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.238 [2024-07-13 05:22:53.358241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.358274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:25944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.238 [2024-07-13 05:22:53.358297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.358334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.238 [2024-07-13 05:22:53.358358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.358391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:26560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.238 [2024-07-13 05:22:53.358414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.358447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:26568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.238 [2024-07-13 05:22:53.358470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.358502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.238 [2024-07-13 05:22:53.358525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.358558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:67 nsid:1 lba:26584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.238 [2024-07-13 05:22:53.358582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.359284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.238 [2024-07-13 05:22:53.359331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.359385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:26600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.238 [2024-07-13 05:22:53.359411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.359446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:26608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.238 [2024-07-13 05:22:53.359486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.359537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:26616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.238 [2024-07-13 05:22:53.359562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.359596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:26624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.238 [2024-07-13 05:22:53.359620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.359653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:26632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.238 [2024-07-13 05:22:53.359677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.359710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.238 [2024-07-13 05:22:53.359750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.359791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:26648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.238 [2024-07-13 05:22:53.359817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.359853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.238 [2024-07-13 05:22:53.359888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.359924] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.238 [2024-07-13 05:22:53.359950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.359984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.238 [2024-07-13 05:22:53.360010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.360044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.238 [2024-07-13 05:22:53.360085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.360119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.238 [2024-07-13 05:22:53.360143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.360176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.238 [2024-07-13 05:22:53.360199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.360233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:26704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.238 [2024-07-13 05:22:53.360257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.360290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.238 [2024-07-13 05:22:53.360314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.360348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.238 [2024-07-13 05:22:53.360372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.360688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.238 [2024-07-13 05:22:53.360721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.360774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:26736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.238 [2024-07-13 05:22:53.360802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 
00:34:06.238 [2024-07-13 05:22:53.360840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:26744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.238 [2024-07-13 05:22:53.360877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.360916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:26752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.238 [2024-07-13 05:22:53.360942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.360978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:26760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.238 [2024-07-13 05:22:53.361003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.361038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:26768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.238 [2024-07-13 05:22:53.361063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.361097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.238 [2024-07-13 05:22:53.361122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.361157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.238 [2024-07-13 05:22:53.361198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:06.238 [2024-07-13 05:22:53.361233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.361258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.361306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.361330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.361382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.361407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.361443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.361468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:97 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.361503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.361528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.361562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:26008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.361587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.361621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:26016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.361665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.361700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.361725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.361758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:26032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.361782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.361835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:26040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.361860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.361905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:26048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.361930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.361965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:26056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.361991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.362025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:26064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.362051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.362085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.362110] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.362145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:26080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.362185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.362220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.362244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.362276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:26096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.362301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.362335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:26104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.362358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.362391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:26112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.362415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.362457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:26120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.362481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.362516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:26128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.362540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.362573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:26136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.362597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.362631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.362655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.362689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:06.239 [2024-07-13 05:22:53.362713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.362746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.362786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.362822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.362847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.362892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:26176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.362919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.362955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:26184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.362980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.363015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:26192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.363040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.363075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:26200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.363100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.363136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:26208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.363161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.363216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.363240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.363275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.363299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.363334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 
lba:26232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.363357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.363391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:26240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.363432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.363469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:26248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.363494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.363540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:26256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.363565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.363599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:26264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.363624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.363659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:26272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.363684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.363719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:26280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.363743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.363779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:26288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.363804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:06.239 [2024-07-13 05:22:53.363840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:26296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.239 [2024-07-13 05:22:53.363873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.363912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:26304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.240 [2024-07-13 05:22:53.363938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.363973] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:26312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.240 [2024-07-13 05:22:53.364003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.364040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:26320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.240 [2024-07-13 05:22:53.364065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.364101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:26328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.240 [2024-07-13 05:22:53.364126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.364818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:26784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.240 [2024-07-13 05:22:53.364863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.364925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.240 [2024-07-13 05:22:53.364956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.365000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:26800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.240 [2024-07-13 05:22:53.365037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.365077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.240 [2024-07-13 05:22:53.365103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.365138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.240 [2024-07-13 05:22:53.365163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.365217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:26824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.240 [2024-07-13 05:22:53.365252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.365291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.240 [2024-07-13 05:22:53.365317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:34:06.240 [2024-07-13 05:22:53.365352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:26840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.240 [2024-07-13 05:22:53.365383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.365434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:26336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.240 [2024-07-13 05:22:53.365460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.365512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.240 [2024-07-13 05:22:53.365548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.365593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:26352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.240 [2024-07-13 05:22:53.365630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.365673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.240 [2024-07-13 05:22:53.365699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.365734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:26368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.240 [2024-07-13 05:22:53.365760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.365794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:26376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.240 [2024-07-13 05:22:53.365833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.365889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:26384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.240 [2024-07-13 05:22:53.365917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.365953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.240 [2024-07-13 05:22:53.365977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.366013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.240 [2024-07-13 05:22:53.366037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.366084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.240 [2024-07-13 05:22:53.366111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.366151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.240 [2024-07-13 05:22:53.366177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.366254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:26416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.240 [2024-07-13 05:22:53.366284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.366322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:26424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.240 [2024-07-13 05:22:53.366347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.366382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:26432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.240 [2024-07-13 05:22:53.366412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.366467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:26440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.240 [2024-07-13 05:22:53.366494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.366537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:26448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.240 [2024-07-13 05:22:53.366568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.366605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:26456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.240 [2024-07-13 05:22:53.366631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.366679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.240 [2024-07-13 05:22:53.366707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:06.240 [2024-07-13 05:22:53.366748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:26472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.240 [2024-07-13 05:22:53.366789] nvme_qpair.c: 
00:34:06.240 [2024-07-13 05:22:53.366842] nvme_qpair.c: *NOTICE*: [repetitive I/O trace condensed: several hundred paired 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion notices on qid:1 between 05:22:53.366842 and 05:22:53.383542 — READ commands (nsid:1, lba 25832-26392, len:8, SGL TRANSPORT DATA BLOCK) and WRITE commands (nsid:1, lba 26400-26848, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0]
lba:26072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.246 [2024-07-13 05:22:53.383567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:06.246 [2024-07-13 05:22:53.383603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:26080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.246 [2024-07-13 05:22:53.383643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:06.246 [2024-07-13 05:22:53.383695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:26088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.246 [2024-07-13 05:22:53.383722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:06.246 [2024-07-13 05:22:53.383757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:26096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.246 [2024-07-13 05:22:53.383781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:06.246 [2024-07-13 05:22:53.383815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:26104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.246 [2024-07-13 05:22:53.383838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:06.246 [2024-07-13 05:22:53.383899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:26112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.246 [2024-07-13 05:22:53.383925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:06.246 [2024-07-13 05:22:53.383961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:26120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.246 [2024-07-13 05:22:53.383986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.246 [2024-07-13 05:22:53.384026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.246 [2024-07-13 05:22:53.384051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:06.246 [2024-07-13 05:22:53.384086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.246 [2024-07-13 05:22:53.384111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:06.246 [2024-07-13 05:22:53.384146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.246 [2024-07-13 05:22:53.384192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:06.246 [2024-07-13 05:22:53.384225] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.246 [2024-07-13 05:22:53.384248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:06.246 [2024-07-13 05:22:53.384282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:26160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.246 [2024-07-13 05:22:53.384305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:06.246 [2024-07-13 05:22:53.384338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:26168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.246 [2024-07-13 05:22:53.384377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:06.246 [2024-07-13 05:22:53.384412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:26176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.246 [2024-07-13 05:22:53.384437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:06.246 [2024-07-13 05:22:53.384471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:26184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.246 [2024-07-13 05:22:53.384495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:06.246 [2024-07-13 05:22:53.384530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:26192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.246 [2024-07-13 05:22:53.384554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:06.246 [2024-07-13 05:22:53.384587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.246 [2024-07-13 05:22:53.384612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:06.246 [2024-07-13 05:22:53.384646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.246 [2024-07-13 05:22:53.384686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:06.246 [2024-07-13 05:22:53.384720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:26216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.246 [2024-07-13 05:22:53.384743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:06.246 [2024-07-13 05:22:53.384780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:26224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.246 [2024-07-13 05:22:53.384804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 
00:34:06.246 [2024-07-13 05:22:53.384861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:26232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.246 [2024-07-13 05:22:53.384898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:06.246 [2024-07-13 05:22:53.384942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:26240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.246 [2024-07-13 05:22:53.384967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:06.246 [2024-07-13 05:22:53.385002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:26248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.246 [2024-07-13 05:22:53.385027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:06.246 [2024-07-13 05:22:53.385061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:26256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.246 [2024-07-13 05:22:53.385086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:06.246 [2024-07-13 05:22:53.385121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:26264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.246 [2024-07-13 05:22:53.385162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:06.246 [2024-07-13 05:22:53.385203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:26272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.246 [2024-07-13 05:22:53.385227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.385267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:26280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.247 [2024-07-13 05:22:53.385307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.385344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:26288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.247 [2024-07-13 05:22:53.385369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.385403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:26296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.247 [2024-07-13 05:22:53.385428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.385463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:26304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.247 [2024-07-13 05:22:53.385488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.385540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:26312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.247 [2024-07-13 05:22:53.385565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.386242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:26320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.247 [2024-07-13 05:22:53.386279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.386323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:26328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.247 [2024-07-13 05:22:53.386349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.386389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:26784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.247 [2024-07-13 05:22:53.386414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.386475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.247 [2024-07-13 05:22:53.386500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.386549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.247 [2024-07-13 05:22:53.386573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.386606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.247 [2024-07-13 05:22:53.386630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.386663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.247 [2024-07-13 05:22:53.386686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.386718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:26824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.247 [2024-07-13 05:22:53.386741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.386774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.247 [2024-07-13 05:22:53.386798] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.386831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.247 [2024-07-13 05:22:53.386882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.386925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:26336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.247 [2024-07-13 05:22:53.386951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.386986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.247 [2024-07-13 05:22:53.387010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.387045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:26352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.247 [2024-07-13 05:22:53.387074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.387110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:26360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.247 [2024-07-13 05:22:53.387135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.387186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:26368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.247 [2024-07-13 05:22:53.387210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.387260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.247 [2024-07-13 05:22:53.387284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.387316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:26384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.247 [2024-07-13 05:22:53.387340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.387372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.247 [2024-07-13 05:22:53.387396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.387428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:06.247 [2024-07-13 05:22:53.387452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.387485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:26400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.247 [2024-07-13 05:22:53.387508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.387541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:26408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.247 [2024-07-13 05:22:53.387564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.387613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:26416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.247 [2024-07-13 05:22:53.387637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.387670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:26424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.247 [2024-07-13 05:22:53.387693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.387726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:26432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.247 [2024-07-13 05:22:53.387749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.387782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:26440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.247 [2024-07-13 05:22:53.387805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.387842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.247 [2024-07-13 05:22:53.387895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.387933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:26456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.247 [2024-07-13 05:22:53.387958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.387993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:26464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.247 [2024-07-13 05:22:53.388018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.388054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 
lba:26472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.247 [2024-07-13 05:22:53.388079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.388114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:26480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.247 [2024-07-13 05:22:53.388138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.388188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:26488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.247 [2024-07-13 05:22:53.388212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.388260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:26496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.247 [2024-07-13 05:22:53.388285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.388336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:26504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.247 [2024-07-13 05:22:53.388361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.388398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.247 [2024-07-13 05:22:53.388423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.388458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:26520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.247 [2024-07-13 05:22:53.388482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.388517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:26528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.247 [2024-07-13 05:22:53.388542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:06.247 [2024-07-13 05:22:53.388577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:26536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.247 [2024-07-13 05:22:53.388602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.388657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.248 [2024-07-13 05:22:53.388697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.388731] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:26552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.248 [2024-07-13 05:22:53.388755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.388788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.248 [2024-07-13 05:22:53.388811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.388844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.248 [2024-07-13 05:22:53.388893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.388933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.248 [2024-07-13 05:22:53.388958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.388993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:25856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.248 [2024-07-13 05:22:53.389018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.389053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.248 [2024-07-13 05:22:53.389078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.389113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.248 [2024-07-13 05:22:53.389137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.389189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.248 [2024-07-13 05:22:53.389228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.389263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.248 [2024-07-13 05:22:53.389287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.389320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.248 [2024-07-13 05:22:53.389343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 
dnr:0 00:34:06.248 [2024-07-13 05:22:53.389377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.248 [2024-07-13 05:22:53.389400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.389442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.248 [2024-07-13 05:22:53.389470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.389503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.248 [2024-07-13 05:22:53.389526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.389559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.248 [2024-07-13 05:22:53.389583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.389616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.248 [2024-07-13 05:22:53.389639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.389672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.248 [2024-07-13 05:22:53.389696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.389730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.248 [2024-07-13 05:22:53.389766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.389801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:26560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.248 [2024-07-13 05:22:53.389834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.389895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.248 [2024-07-13 05:22:53.389923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.389959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.248 [2024-07-13 05:22:53.389984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.390019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:26584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.248 [2024-07-13 05:22:53.390043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.390078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:26592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.248 [2024-07-13 05:22:53.390103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.390138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.248 [2024-07-13 05:22:53.390170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.390230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.248 [2024-07-13 05:22:53.390256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.390298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.248 [2024-07-13 05:22:53.390322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.390355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.248 [2024-07-13 05:22:53.390378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.390411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.248 [2024-07-13 05:22:53.390435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.390468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:26640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.248 [2024-07-13 05:22:53.390491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.390524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.248 [2024-07-13 05:22:53.390555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.390587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.248 [2024-07-13 05:22:53.390621] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.390655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.248 [2024-07-13 05:22:53.390678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.390712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.248 [2024-07-13 05:22:53.390735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.390769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.248 [2024-07-13 05:22:53.390791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.390825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.248 [2024-07-13 05:22:53.390864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.390915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.248 [2024-07-13 05:22:53.390941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.391976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:26704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.248 [2024-07-13 05:22:53.392014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.392058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:26712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.248 [2024-07-13 05:22:53.392085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.392121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.248 [2024-07-13 05:22:53.392146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.392182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.248 [2024-07-13 05:22:53.392216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.392279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:26736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:06.248 [2024-07-13 05:22:53.392319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:06.248 [2024-07-13 05:22:53.392353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:26744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.249 [2024-07-13 05:22:53.392393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:06.249 [2024-07-13 05:22:53.392427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:26752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.249 [2024-07-13 05:22:53.392451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:06.249 [2024-07-13 05:22:53.392485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:26760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.249 [2024-07-13 05:22:53.392509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:06.249 [2024-07-13 05:22:53.392553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:26768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.249 [2024-07-13 05:22:53.392577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:06.249 [2024-07-13 05:22:53.392622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:26776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.249 [2024-07-13 05:22:53.392647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:06.249 [2024-07-13 05:22:53.392696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.249 [2024-07-13 05:22:53.392720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:06.249 [2024-07-13 05:22:53.392752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.249 [2024-07-13 05:22:53.392776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:06.249 [2024-07-13 05:22:53.392875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:25976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.249 [2024-07-13 05:22:53.392903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:06.249 [2024-07-13 05:22:53.392946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.249 [2024-07-13 05:22:53.392971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:06.249 [2024-07-13 05:22:53.393007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 
lba:25992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.249 [2024-07-13 05:22:53.393032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:06.249 [2024-07-13 05:22:53.393067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:26000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.249 [2024-07-13 05:22:53.393092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:06.249 [2024-07-13 05:22:53.393128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:26008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.249 [2024-07-13 05:22:53.393168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:06.249 [2024-07-13 05:22:53.393204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:26016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.249 [2024-07-13 05:22:53.393229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:06.249 [2024-07-13 05:22:53.393263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:26024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.249 [2024-07-13 05:22:53.393286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:06.249 [2024-07-13 05:22:53.393320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:26032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.249 [2024-07-13 05:22:53.393344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:06.249 [2024-07-13 05:22:53.393378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.249 [2024-07-13 05:22:53.393417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:06.249 [2024-07-13 05:22:53.393450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.249 [2024-07-13 05:22:53.393474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:06.249 [2024-07-13 05:22:53.393507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.249 [2024-07-13 05:22:53.393530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:06.249 [2024-07-13 05:22:53.393581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.249 [2024-07-13 05:22:53.393605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:06.249 [2024-07-13 05:22:53.393639] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:26072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.249 [2024-07-13 05:22:53.393663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:06.249 [2024-07-13 05:22:53.393702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:26080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.249 [2024-07-13 05:22:53.393727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:06.249 [2024-07-13 05:22:53.393761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:26088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.249 [2024-07-13 05:22:53.393785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:06.249 [2024-07-13 05:22:53.393818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:26096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.249 [2024-07-13 05:22:53.393858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:06.249 [2024-07-13 05:22:53.393920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:26104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.249 [2024-07-13 05:22:53.393947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:06.249 [2024-07-13 05:22:53.393984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:26112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.249 [2024-07-13 05:22:53.394008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:06.249 [2024-07-13 05:22:53.394044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.249 [2024-07-13 05:22:53.394069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.249 [2024-07-13 05:22:53.394105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:26128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.249 [2024-07-13 05:22:53.394130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:06.249 [2024-07-13 05:22:53.394164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.249 [2024-07-13 05:22:53.394189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:06.249 [2024-07-13 05:22:53.394244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:26144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.249 [2024-07-13 05:22:53.394283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0024 p:0 m:0 
dnr:0
00:34:06.249 [2024-07-13 05:22:53.394318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:26152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:06.249 [2024-07-13 05:22:53.394342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
[... repeated *NOTICE* command/completion pairs elided: every queued READ and WRITE on qid:1 is printed by nvme_io_qpair_print_command and completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02), in two bursts at 05:22:53.394-53.405 and 05:22:53.902-53.908 ...]
00:34:06.254 [2024-07-13 05:22:53.908496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:26576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:06.254 [2024-07-13 05:22:53.908525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:06.254 [2024-07-13 05:22:53.908566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.254 [2024-07-13 05:22:53.908593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:06.254 [2024-07-13 05:22:53.908635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.254 [2024-07-13 05:22:53.908662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:06.254 [2024-07-13 05:22:53.908703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.254 [2024-07-13 05:22:53.908729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:06.254 [2024-07-13 05:22:53.908788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.254 [2024-07-13 05:22:53.908818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:06.254 [2024-07-13 05:22:53.908861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.254 [2024-07-13 05:22:53.908911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:22:53.908960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:26624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:22:53.908987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:22:53.909041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:22:53.909072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:22:53.909115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:26640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:22:53.909142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:22:53.909184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:26648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:22:53.909237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:22:53.909285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 
lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:22:53.909313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:22:53.909354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:22:53.909379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:22:53.909449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:22:53.909480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:22:53.909527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:22:53.909557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:22:53.909816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:22:53.909848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.970306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.255 [2024-07-13 05:23:08.970411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.970512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.255 [2024-07-13 05:23:08.970542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.970579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.255 [2024-07-13 05:23:08.970613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.970649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.255 [2024-07-13 05:23:08.970689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.970724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.255 [2024-07-13 05:23:08.970763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.970798] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.255 [2024-07-13 05:23:08.970821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.970856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:62424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:23:08.970890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.970926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:62440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:23:08.970950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.970985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:23:08.971009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.971044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:62472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:23:08.971084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.971122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:23:08.971147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.971199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:23:08.971225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.971262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.255 [2024-07-13 05:23:08.971288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.971324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.255 [2024-07-13 05:23:08.971350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.971386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.255 [2024-07-13 05:23:08.971413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 
00:34:06.255 [2024-07-13 05:23:08.971454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.255 [2024-07-13 05:23:08.971481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.971535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:23:08.971561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.971612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:23:08.971636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.971671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:23:08.971695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.971729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:23:08.971753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.971788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:23:08.971812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.971847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.255 [2024-07-13 05:23:08.971882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.971919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.255 [2024-07-13 05:23:08.971944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.971990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.255 [2024-07-13 05:23:08.972013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.972048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:23:08.972072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.972106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:23:08.972130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.972165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:23:08.972188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.972227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:23:08.972252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.972296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:23:08.972321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.972355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:23:08.972380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.972415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:23:08.972440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.973528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:23:08.973562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:06.255 [2024-07-13 05:23:08.973617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.255 [2024-07-13 05:23:08.973646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:06.256 [2024-07-13 05:23:08.973683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.256 [2024-07-13 05:23:08.973710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:06.256 [2024-07-13 05:23:08.973746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.256 [2024-07-13 05:23:08.973772] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:06.256 [2024-07-13 05:23:08.973830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.256 [2024-07-13 05:23:08.973881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:06.256 [2024-07-13 05:23:08.973927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.256 [2024-07-13 05:23:08.973955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:06.256 [2024-07-13 05:23:08.973993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:62208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.256 [2024-07-13 05:23:08.974018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:06.256 [2024-07-13 05:23:08.974055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.256 [2024-07-13 05:23:08.974083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:06.256 [2024-07-13 05:23:08.974120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.256 [2024-07-13 05:23:08.974152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:06.256 [2024-07-13 05:23:08.974206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.256 [2024-07-13 05:23:08.974249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:06.256 [2024-07-13 05:23:08.974285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.256 [2024-07-13 05:23:08.974311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:06.256 [2024-07-13 05:23:08.974346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.256 [2024-07-13 05:23:08.974370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:06.256 [2024-07-13 05:23:08.974405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.256 [2024-07-13 05:23:08.974430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:06.256 [2024-07-13 05:23:08.974465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:06.256 [2024-07-13 05:23:08.974506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:06.256 [2024-07-13 05:23:08.974543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.256 [2024-07-13 05:23:08.974569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:06.256 [2024-07-13 05:23:08.974625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.256 [2024-07-13 05:23:08.974653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:06.256 [2024-07-13 05:23:08.976119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:62232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.256 [2024-07-13 05:23:08.976155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:06.256 [2024-07-13 05:23:08.976198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.256 [2024-07-13 05:23:08.976226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:06.256 [2024-07-13 05:23:08.976262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.256 [2024-07-13 05:23:08.976289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:06.256 [2024-07-13 05:23:08.976326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.256 [2024-07-13 05:23:08.976352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:06.256 [2024-07-13 05:23:08.976387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.256 [2024-07-13 05:23:08.976435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:06.256 [2024-07-13 05:23:08.976473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.256 [2024-07-13 05:23:08.976500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:06.256 [2024-07-13 05:23:08.976535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.256 [2024-07-13 05:23:08.976560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:06.256 [2024-07-13 05:23:08.976594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 
nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.256 [2024-07-13 05:23:08.976620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:06.256 [2024-07-13 05:23:08.976656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.256 [2024-07-13 05:23:08.976695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:06.256 [2024-07-13 05:23:08.976733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.256 [2024-07-13 05:23:08.976760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:06.256 [2024-07-13 05:23:08.976797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.256 [2024-07-13 05:23:08.976840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.256 [2024-07-13 05:23:08.976888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:62864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.256 [2024-07-13 05:23:08.976918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:06.256 [2024-07-13 05:23:08.976956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.256 [2024-07-13 05:23:08.976998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:06.256 [2024-07-13 05:23:08.977038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.256 [2024-07-13 05:23:08.977079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:06.256 Received shutdown signal, test time was about 32.371213 seconds 00:34:06.256 00:34:06.256 Latency(us) 00:34:06.256 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:06.256 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:06.256 Verification LBA range: start 0x0 length 0x4000 00:34:06.256 Nvme0n1 : 32.37 5772.31 22.55 0.00 0.00 22141.59 2148.12 3579139.41 00:34:06.256 =================================================================================================================== 00:34:06.256 Total : 5772.31 22.55 0.00 0.00 22141.59 2148.12 3579139.41 00:34:06.256 05:23:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:06.515 05:23:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:34:06.515 05:23:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 
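The @143 record above is the target-side teardown: nvmf_delete_subsystem drops nqn.2016-06.io.spdk:cnode1, and the listeners the host paths were attached to go with it. A minimal stand-alone sketch of that step; rpc.py, the workspace path and the subsystem name are taken from this log, while the nvmf_get_subsystems call is only added here as a sanity check:

    #!/usr/bin/env bash
    # Sketch: target-side cleanup as performed by multipath_status.sh@143.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path as seen in this log
    rpc=$rootdir/scripts/rpc.py

    # Remove the subsystem the host was multipathing to; its listeners go with it.
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # Optional sanity check: only the discovery subsystem should remain.
    $rpc nvmf_get_subsystems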
00:34:06.515 05:23:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:34:06.515 05:23:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:34:06.515 05:23:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:34:06.515 05:23:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:34:06.515 05:23:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:34:06.515 05:23:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:34:06.515 05:23:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:34:06.515 05:23:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:34:06.515 05:23:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:34:06.515 05:23:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:34:06.515 05:23:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 832166 ']'
00:34:06.515 05:23:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 832166
00:34:06.515 05:23:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 832166 ']'
00:34:06.515 05:23:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 832166
00:34:06.515 05:23:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:34:06.515 05:23:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:34:06.515 05:23:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 832166
00:34:06.515 05:23:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:34:06.515 05:23:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:34:06.515 05:23:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 832166'
killing process with pid 832166
00:34:06.515 05:23:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 832166
00:34:06.515 05:23:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 832166
00:34:07.890 05:23:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:34:07.890 05:23:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:34:07.890 05:23:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:34:07.890 05:23:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:34:07.890 05:23:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:34:07.890 05:23:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:07.890 05:23:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:34:07.890 05:23:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:10.420 05:23:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
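killprocess, expanded record by record above (@948 through @972), is the stock autotest helper for stopping a daemon: verify the pid is alive with kill -0, refuse to touch a sudo wrapper, then kill and wait so the exit status is reaped. A condensed sketch of the same logic as it appears in this trace, not the exact SPDK source:

    # Condensed killprocess, mirroring the traced records above.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0      # already gone
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1      # never kill the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap our child so the pid is not reused
    }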
00:34:10.420
00:34:10.420 real 0m44.336s
00:34:10.420 user 2m11.413s
00:34:10.420 sys 0m10.085s
00:34:10.421 05:23:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable
00:34:10.421 05:23:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:34:10.421 ************************************
00:34:10.421 END TEST nvmf_host_multipath_status
00:34:10.421 ************************************
00:34:10.421 05:23:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:34:10.421 05:23:16 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:34:10.421 05:23:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:34:10.421 05:23:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:34:10.421 05:23:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:10.421 ************************************
00:34:10.421 START TEST nvmf_discovery_remove_ifc
00:34:10.421 ************************************
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:34:10.421 * Looking for test storage...
00:34:10.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
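Everything from the START TEST banner to this point is emitted by autotest's run_test wrapper: it brackets the child script with banners and times it, which is where the real/user/sys block above comes from. A simplified model of what the wrapper does, with banner width and error handling trimmed:

    # Simplified model of autotest's run_test wrapper.
    run_test() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                                   # the real/user/sys block comes from this
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    # Usage, as in nvmf/nvmf.sh@103 above:
    #   run_test nvmf_discovery_remove_ifc "$rootdir/test/nvmf/host/discovery_remove_ifc.sh" --transport=tcp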
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:10.421 [paths/export.sh@2-@6 elided: three PATH= prepends, an export PATH and an echo of the result; each step re-prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin, so the final PATH carries the same three toolchain entries six times ahead of the stock /usr/local/bin:...:/var/lib/snapd/snap/bin tail]
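A side note on the elided export.sh records: the nested sources prepend unconditionally, which is why the same directories pile up in PATH. Lookup still works, since the first hit wins; if one wanted an idempotent prepend, a guard like the following would do. path_prepend is a hypothetical helper written for illustration, not part of the SPDK tree:

    # Hypothetical guard against duplicate PATH entries (not in the SPDK tree).
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;                 # already present, leave PATH alone
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/golangci/1.54.2/bin
    export PATH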
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable
00:34:10.421 05:23:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
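nvmftestinit (@39 above) installs the nvmftestfini trap and calls prepare_net_devs, which on a NET_TYPE=phy run falls through to the PCI scan whose records follow. In outline, glossing over the virt and RDMA branches and reconstructed from this trace rather than the SPDK source:

    # Outline of the prepare_net_devs decision traced above (virt/RDMA paths omitted).
    prepare_net_devs() {
        local -g is_hw=no
        remove_spdk_ns                          # clear any namespace a previous test left behind
        if [[ $NET_TYPE != virt ]]; then
            gather_supported_nvmf_pci_devs      # the PCI scan whose records follow
        fi
        # gather_* flips is_hw=yes once at least one supported NIC is found;
        # only then does nvmf_tcp_init get real interfaces to work with.
    }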
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=()
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=()
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=()
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=()
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=()
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=()
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=()
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
Found 0000:0a:00.0 (0x8086 - 0x159b)
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
Found 0000:0a:00.1 (0x8086 - 0x159b)
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]]
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
Found net devices under 0000:0a:00.0: cvl_0_0
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]]
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
Found net devices under 0000:0a:00.1: cvl_0_1
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]]
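The @340-@401 records above are the whole discovery mechanism: match vendor:device IDs against the e810 table, then map each matching PCI function to its kernel netdev through sysfs. The same walk can be reproduced stand-alone; this sketch hard-codes the IDs seen in this log (0x8086:0x159b) instead of SPDK's pci_bus_cache machinery:

    # Sketch: find E810 ports (8086:159b) and their net devices via sysfs.
    net_devs=()
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        echo "Found ${pci##*/} (0x8086 - 0x159b)"
        pci_net_devs=("$pci"/net/*)
        [[ -e ${pci_net_devs[0]} ]] || continue      # NIC bound to a non-net driver
        pci_net_devs=("${pci_net_devs[@]##*/}")
        echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done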
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:34:12.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:12.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms
00:34:12.325
00:34:12.325 --- 10.0.0.2 ping statistics ---
00:34:12.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:12.325 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:12.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:12.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms
00:34:12.325
00:34:12.325 --- 10.0.0.1 ping statistics ---
00:34:12.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:12.325 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms
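The @244-@268 records build the test topology: port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and becomes the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1 (the two E810 ports are evidently looped on this rig, since the pings cross between namespaces). The same plumbing, condensed from the records above:

    # The namespace plumbing from nvmf_tcp_init, condensed from the trace.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add $NS
    ip link set cvl_0_0 netns $NS                        # target NIC disappears into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec $NS ping -c 1 10.0.0.1                 # target -> initiator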
00:34:12.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms
00:34:12.325 
00:34:12.325 --- 10.0.0.1 ping statistics ---
00:34:12.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:12.325 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=838908
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 838908
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 838908 ']'
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:12.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable
00:34:12.325 05:23:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:34:12.325 [2024-07-13 05:23:18.608471] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
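In plain terms, the interface plumbing the trace above performs is small: one port of the E810 NIC (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, the other port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, and nvmf_tgt is then launched inside the namespace. A minimal sketch, using only the device names, addresses, and binary path visible in the trace (the surrounding helper logic in nvmf/common.sh is elided):

  # move the target-side port into its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP traffic (port 4420) reach the initiator-side port
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # confirm reachability in both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # run the target inside the namespace; -m 0x2 pins one reactor to core 1
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

Because both ports sit on the same physical NIC, the namespace split is what forces traffic between 10.0.0.1 and 10.0.0.2 onto the wire instead of the local stack.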
00:34:12.325 [2024-07-13 05:23:18.608629] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:12.325 EAL: No free 2048 kB hugepages reported on node 1 00:34:12.325 [2024-07-13 05:23:18.750935] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:12.584 [2024-07-13 05:23:19.007129] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:12.584 [2024-07-13 05:23:19.007200] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:12.584 [2024-07-13 05:23:19.007229] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:12.584 [2024-07-13 05:23:19.007254] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:12.584 [2024-07-13 05:23:19.007281] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:12.584 [2024-07-13 05:23:19.007332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:13.151 05:23:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:13.151 05:23:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:34:13.151 05:23:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:13.151 05:23:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:13.151 05:23:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:13.151 05:23:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:13.151 05:23:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:34:13.151 05:23:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.151 05:23:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:13.151 [2024-07-13 05:23:19.554935] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:13.151 [2024-07-13 05:23:19.563116] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:13.151 null0 00:34:13.151 [2024-07-13 05:23:19.595041] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:13.151 05:23:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.151 05:23:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=839065 00:34:13.151 05:23:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:34:13.151 05:23:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 839065 /tmp/host.sock 00:34:13.151 05:23:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 839065 ']' 00:34:13.151 05:23:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:34:13.151 05:23:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:34:13.151 05:23:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:13.151 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:13.151 05:23:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:13.151 05:23:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:13.409 [2024-07-13 05:23:19.695202] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:34:13.409 [2024-07-13 05:23:19.695361] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid839065 ] 00:34:13.409 EAL: No free 2048 kB hugepages reported on node 1 00:34:13.409 [2024-07-13 05:23:19.819464] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:13.667 [2024-07-13 05:23:20.071319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:14.234 05:23:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:14.234 05:23:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:34:14.234 05:23:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:14.234 05:23:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:34:14.234 05:23:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.234 05:23:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:14.234 05:23:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.234 05:23:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:34:14.234 05:23:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.234 05:23:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:14.493 05:23:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.493 05:23:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:34:14.493 05:23:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.493 05:23:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:15.867 [2024-07-13 05:23:21.989043] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:15.867 [2024-07-13 05:23:21.989101] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:15.867 [2024-07-13 05:23:21.989146] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:15.867 [2024-07-13 05:23:22.115607] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:15.867 [2024-07-13 05:23:22.341240] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:15.867 [2024-07-13 05:23:22.341362] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:15.867 [2024-07-13 05:23:22.341464] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:15.867 [2024-07-13 05:23:22.341513] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:15.867 [2024-07-13 05:23:22.341579] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:15.867 05:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.867 05:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:34:15.867 05:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:15.867 [2024-07-13 05:23:22.346065] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150001f2780 was disconnected and freed. delete nvme_qpair. 00:34:15.867 05:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:15.867 05:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:15.867 05:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.867 05:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:15.867 05:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:15.867 05:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:15.867 05:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.125 05:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:34:16.126 05:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:34:16.126 05:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:34:16.126 05:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:34:16.126 05:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:16.126 05:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:16.126 05:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:16.126 05:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.126 05:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:16.126 05:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:16.126 05:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:16.126 05:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.126 05:23:22 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:16.126 05:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:17.057 05:23:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:17.057 05:23:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:17.058 05:23:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:17.058 05:23:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.058 05:23:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:17.058 05:23:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:17.058 05:23:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:17.058 05:23:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.058 05:23:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:17.058 05:23:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:18.431 05:23:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:18.431 05:23:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:18.431 05:23:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:18.431 05:23:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.431 05:23:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:18.431 05:23:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:18.431 05:23:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:18.431 05:23:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.431 05:23:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:18.431 05:23:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:19.365 05:23:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:19.365 05:23:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:19.365 05:23:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:19.365 05:23:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.365 05:23:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:19.365 05:23:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:19.365 05:23:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:19.365 05:23:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.365 05:23:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:19.365 05:23:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:20.298 05:23:26 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:20.298 05:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:20.298 05:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.298 05:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:20.298 05:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:20.298 05:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:20.298 05:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:20.298 05:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.298 05:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:20.298 05:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:21.229 05:23:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:21.229 05:23:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:21.229 05:23:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:21.229 05:23:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.229 05:23:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:21.229 05:23:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:21.229 05:23:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:21.229 05:23:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.229 05:23:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:21.229 05:23:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:21.486 [2024-07-13 05:23:27.782698] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:34:21.486 [2024-07-13 05:23:27.782807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:21.486 [2024-07-13 05:23:27.782844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.486 [2024-07-13 05:23:27.782887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:21.486 [2024-07-13 05:23:27.782929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.486 [2024-07-13 05:23:27.782950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:21.486 [2024-07-13 05:23:27.782968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.486 [2024-07-13 05:23:27.782987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:21.486 [2024-07-13 05:23:27.783007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.486 [2024-07-13 05:23:27.783026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:21.486 [2024-07-13 05:23:27.783045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.486 [2024-07-13 05:23:27.783064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:34:21.486 [2024-07-13 05:23:27.792704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:34:21.486 [2024-07-13 05:23:27.802760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:22.418 05:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:22.418 05:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:22.418 05:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.418 05:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:22.418 05:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:22.418 05:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:22.418 05:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:22.418 [2024-07-13 05:23:28.861904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:22.418 [2024-07-13 05:23:28.861972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:34:22.418 [2024-07-13 05:23:28.862008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:34:22.418 [2024-07-13 05:23:28.862060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:34:22.418 [2024-07-13 05:23:28.862716] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:22.418 [2024-07-13 05:23:28.862765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:22.418 [2024-07-13 05:23:28.862800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:22.418 [2024-07-13 05:23:28.862828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:22.418 [2024-07-13 05:23:28.862881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
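The quick give-up seen here is configured, not accidental: the host was started with tight reconnect limits so the test finishes promptly once the interface disappears. The discovery attach issued earlier in the trace, written out as a standalone call (flags copied from the trace; the suite's rpc_cmd wrapper is replaced here by invoking scripts/rpc.py directly, which is an assumption about the environment):

  # attach via discovery with aggressive failure timers:
  # retry once per second, fast-fail I/O after 1 s, and delete the
  # controller entirely after 2 s without a working connection
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 \
      --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 \
      --wait-for-attach

With those timers, the keep-alive timeout above leads to exactly one visible reset/reconnect cycle before the controller is declared lost.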
00:34:22.418 [2024-07-13 05:23:28.862927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:22.418 05:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.418 05:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:22.418 05:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:23.790 [2024-07-13 05:23:29.865462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:23.790 [2024-07-13 05:23:29.865510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:23.791 [2024-07-13 05:23:29.865534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:23.791 [2024-07-13 05:23:29.865556] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:34:23.791 [2024-07-13 05:23:29.865603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.791 [2024-07-13 05:23:29.865677] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:34:23.791 [2024-07-13 05:23:29.865750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:23.791 [2024-07-13 05:23:29.865787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:23.791 [2024-07-13 05:23:29.865820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:23.791 [2024-07-13 05:23:29.865845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:23.791 [2024-07-13 05:23:29.865879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:23.791 [2024-07-13 05:23:29.865906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:23.791 [2024-07-13 05:23:29.865942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:23.791 [2024-07-13 05:23:29.865969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:23.791 [2024-07-13 05:23:29.865992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:23.791 [2024-07-13 05:23:29.866011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:23.791 [2024-07-13 05:23:29.866029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
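The repeated rpc_cmd/jq/sort/xargs bursts throughout this test are a single polling idiom. Roughly, discovery_remove_ifc.sh does the following (a sketch of the helpers as they appear in the trace; scripts/rpc.py stands in for the suite's rpc_cmd wrapper):

  # list current bdev names as one sorted, space-separated string
  get_bdev_list() {
      scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  # poll once per second until the bdev list matches the expectation;
  # the test waits for "nvme0n1", then for "" after the interface is
  # removed, then for "nvme1n1" once it comes back
  wait_for_bdev() {
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }

The stretch of identical get_bdev_list blocks above is that loop ticking once per second while the controller was still resetting; it exits only here, where the failed controller is finally torn down and the list becomes empty.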
00:34:23.791 [2024-07-13 05:23:29.866105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:34:23.791 [2024-07-13 05:23:29.867101] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:34:23.791 [2024-07-13 05:23:29.867131] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:34:23.791 05:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:23.791 05:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:23.791 05:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:23.791 05:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.791 05:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:23.791 05:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:23.791 05:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:23.791 05:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.791 05:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:34:23.791 05:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:23.791 05:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:23.791 05:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:34:23.791 05:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:23.791 05:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:23.791 05:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:23.791 05:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.791 05:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:23.791 05:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:23.791 05:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:23.791 05:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.791 05:23:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:23.791 05:23:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:24.752 05:23:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:24.752 05:23:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:24.752 05:23:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:24.752 05:23:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.752 05:23:31 nvmf_tcp.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@29 -- # sort 00:34:24.752 05:23:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:24.752 05:23:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:24.752 05:23:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.752 05:23:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:24.752 05:23:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:25.686 [2024-07-13 05:23:31.924092] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:25.686 [2024-07-13 05:23:31.924162] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:25.686 [2024-07-13 05:23:31.924217] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:25.686 [2024-07-13 05:23:32.010514] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:34:25.686 05:23:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:25.686 05:23:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:25.686 05:23:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:25.686 05:23:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.686 05:23:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:25.686 05:23:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:25.686 05:23:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:25.686 05:23:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.686 [2024-07-13 05:23:32.076029] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:25.686 [2024-07-13 05:23:32.076106] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:25.686 [2024-07-13 05:23:32.076229] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:25.686 [2024-07-13 05:23:32.076273] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:34:25.686 [2024-07-13 05:23:32.076298] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:25.686 [2024-07-13 05:23:32.082738] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150001f2f00 was disconnected and freed. delete nvme_qpair. 
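Recovery is the mirror image of the removal: once the address is restored and the link brought back up inside the namespace, the still-running discovery service reconnects on its own and surfaces the namespace under a fresh controller name (nvme1 rather than nvme0), with no host-side RPC needed. The two commands from the trace that trigger this:

  # restore the target-side interface; discovery re-attaches automatically
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up

The wait_for_bdev nvme1n1 poll that follows passes as soon as the re-attached subsystem's namespace shows up in bdev_get_bdevs.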
00:34:25.686 05:23:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:25.686 05:23:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:26.619 05:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:26.619 05:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:26.619 05:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:26.619 05:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.619 05:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:26.619 05:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:26.619 05:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:26.877 05:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.877 05:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:34:26.877 05:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:34:26.877 05:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 839065 00:34:26.877 05:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 839065 ']' 00:34:26.877 05:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 839065 00:34:26.877 05:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:34:26.877 05:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:26.877 05:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 839065 00:34:26.877 05:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:26.877 05:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:26.877 05:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 839065' 00:34:26.877 killing process with pid 839065 00:34:26.877 05:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 839065 00:34:26.877 05:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 839065 00:34:27.812 05:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:34:27.813 05:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:27.813 05:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:34:27.813 05:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:27.813 05:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:34:27.813 05:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:27.813 05:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:27.813 rmmod nvme_tcp 00:34:27.813 rmmod nvme_fabrics 00:34:28.069 rmmod nvme_keyring 00:34:28.069 05:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:34:28.069 05:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:34:28.070 05:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:34:28.070 05:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 838908 ']' 00:34:28.070 05:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 838908 00:34:28.070 05:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 838908 ']' 00:34:28.070 05:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 838908 00:34:28.070 05:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:34:28.070 05:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:28.070 05:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 838908 00:34:28.070 05:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:28.070 05:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:28.070 05:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 838908' 00:34:28.070 killing process with pid 838908 00:34:28.070 05:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 838908 00:34:28.070 05:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 838908 00:34:29.441 05:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:29.441 05:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:29.441 05:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:29.441 05:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:29.441 05:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:29.441 05:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:29.441 05:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:29.441 05:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:31.342 05:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:31.343 00:34:31.343 real 0m21.329s 00:34:31.343 user 0m31.458s 00:34:31.343 sys 0m3.216s 00:34:31.343 05:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:31.343 05:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:31.343 ************************************ 00:34:31.343 END TEST nvmf_discovery_remove_ifc 00:34:31.343 ************************************ 00:34:31.343 05:23:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:31.343 05:23:37 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:31.343 05:23:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:31.343 05:23:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:31.343 05:23:37 
nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:31.343 ************************************ 00:34:31.343 START TEST nvmf_identify_kernel_target 00:34:31.343 ************************************ 00:34:31.343 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:31.601 * Looking for test storage... 00:34:31.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:31.601 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:31.601 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:34:31.601 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:31.601 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:31.601 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:31.601 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:31.601 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:31.601 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:31.601 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:31.601 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:31.601 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:31.601 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:31.601 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:31.601 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:31.601 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:31.601 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:31.601 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:31.601 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:31.601 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:31.601 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:31.601 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:31.601 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:31.601 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.601 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.601 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.601 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:34:31.601 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.602 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:34:31.602 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:31.602 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:31.602 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:31.602 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:31.602 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:31.602 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:31.602 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:31.602 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:31.602 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:34:31.602 05:23:37 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:31.602 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:31.602 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:31.602 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:31.602 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:31.602 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:31.602 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:31.602 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:31.602 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:31.602 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:31.602 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:34:31.602 05:23:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:33.504 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:33.504 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:34:33.504 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:33.504 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:33.504 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:33.504 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:33.504 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:33.504 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:34:33.504 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:33.504 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:34:33.504 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:34:33.504 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:34:33.504 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:34:33.504 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:34:33.504 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:34:33.504 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:33.504 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:33.504 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:33.504 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:33.504 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:33.504 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:33.505 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:33.505 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:33.505 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:33.505 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:33.505 05:23:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:33.765 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:33.765 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:33.765 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:33.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:33.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:34:33.765 00:34:33.765 --- 10.0.0.2 ping statistics --- 00:34:33.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:33.765 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:34:33.765 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:33.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:33.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:34:33.765 00:34:33.765 --- 10.0.0.1 ping statistics --- 00:34:33.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:33.765 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:34:33.765 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:33.765 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:34:33.765 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:33.765 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:33.765 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:33.765 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:33.765 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:33.765 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:33.765 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:33.765 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:34:33.765 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:34:33.765 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:34:33.765 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:33.766 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:33.766 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.766 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.766 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:33.766 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.766 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:33.766 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:33.766 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:33.766 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:34:33.766 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:33.766 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:33.766 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:34:33.766 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:33.766 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:33.766 05:23:40 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:33.766 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:34:33.766 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:34:33.766 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:34:33.766 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:33.766 05:23:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:34.702 Waiting for block devices as requested 00:34:34.702 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:34.961 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:34.961 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:35.220 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:35.220 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:35.220 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:35.220 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:35.220 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:35.479 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:35.479 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:35.479 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:35.479 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:35.738 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:35.738 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:35.738 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:35.996 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:35.996 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:35.996 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:34:35.996 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:35.996 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:34:35.996 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:34:35.996 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:35.996 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:34:35.996 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:34:35.996 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:34:35.996 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:36.254 No valid GPT data, bailing 00:34:36.255 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:36.255 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:34:36.255 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:34:36.255 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:34:36.255 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:34:36.255 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:36.255 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:36.255 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:36.255 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:36.255 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:34:36.255 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:34:36.255 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:34:36.255 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:34:36.255 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:34:36.255 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:34:36.255 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:34:36.255 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:36.255 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:34:36.255 00:34:36.255 Discovery Log Number of Records 2, Generation counter 2 00:34:36.255 =====Discovery Log Entry 0====== 00:34:36.255 trtype: tcp 00:34:36.255 adrfam: ipv4 00:34:36.255 subtype: current discovery subsystem 00:34:36.255 treq: not specified, sq flow control disable supported 00:34:36.255 portid: 1 00:34:36.255 trsvcid: 4420 00:34:36.255 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:36.255 traddr: 10.0.0.1 00:34:36.255 eflags: none 00:34:36.255 sectype: none 00:34:36.255 =====Discovery Log Entry 1====== 00:34:36.255 trtype: tcp 00:34:36.255 adrfam: ipv4 00:34:36.255 subtype: nvme subsystem 00:34:36.255 treq: not specified, sq flow control disable supported 00:34:36.255 portid: 1 00:34:36.255 trsvcid: 4420 00:34:36.255 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:36.255 traddr: 10.0.0.1 00:34:36.255 eflags: none 00:34:36.255 sectype: none 00:34:36.255 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:34:36.255 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:34:36.255 EAL: No free 2048 kB hugepages reported on node 1 00:34:36.515 ===================================================== 00:34:36.515 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:36.515 ===================================================== 00:34:36.516 Controller Capabilities/Features 00:34:36.516 ================================ 00:34:36.516 Vendor ID: 0000 00:34:36.516 Subsystem Vendor ID: 0000 00:34:36.516 Serial Number: 40eb54732086b00e73c9 00:34:36.516 Model Number: Linux 00:34:36.516 Firmware Version: 6.7.0-68 00:34:36.516 Recommended Arb Burst: 0 00:34:36.516 IEEE OUI Identifier: 00 00 00 00:34:36.516 Multi-path I/O 00:34:36.516 May have multiple subsystem ports: No 00:34:36.516 May have multiple 
controllers: No 00:34:36.516 Associated with SR-IOV VF: No 00:34:36.516 Max Data Transfer Size: Unlimited 00:34:36.516 Max Number of Namespaces: 0 00:34:36.516 Max Number of I/O Queues: 1024 00:34:36.516 NVMe Specification Version (VS): 1.3 00:34:36.516 NVMe Specification Version (Identify): 1.3 00:34:36.516 Maximum Queue Entries: 1024 00:34:36.516 Contiguous Queues Required: No 00:34:36.516 Arbitration Mechanisms Supported 00:34:36.516 Weighted Round Robin: Not Supported 00:34:36.516 Vendor Specific: Not Supported 00:34:36.516 Reset Timeout: 7500 ms 00:34:36.516 Doorbell Stride: 4 bytes 00:34:36.516 NVM Subsystem Reset: Not Supported 00:34:36.516 Command Sets Supported 00:34:36.516 NVM Command Set: Supported 00:34:36.516 Boot Partition: Not Supported 00:34:36.516 Memory Page Size Minimum: 4096 bytes 00:34:36.516 Memory Page Size Maximum: 4096 bytes 00:34:36.516 Persistent Memory Region: Not Supported 00:34:36.516 Optional Asynchronous Events Supported 00:34:36.516 Namespace Attribute Notices: Not Supported 00:34:36.516 Firmware Activation Notices: Not Supported 00:34:36.516 ANA Change Notices: Not Supported 00:34:36.516 PLE Aggregate Log Change Notices: Not Supported 00:34:36.516 LBA Status Info Alert Notices: Not Supported 00:34:36.516 EGE Aggregate Log Change Notices: Not Supported 00:34:36.516 Normal NVM Subsystem Shutdown event: Not Supported 00:34:36.516 Zone Descriptor Change Notices: Not Supported 00:34:36.516 Discovery Log Change Notices: Supported 00:34:36.516 Controller Attributes 00:34:36.516 128-bit Host Identifier: Not Supported 00:34:36.516 Non-Operational Permissive Mode: Not Supported 00:34:36.516 NVM Sets: Not Supported 00:34:36.516 Read Recovery Levels: Not Supported 00:34:36.516 Endurance Groups: Not Supported 00:34:36.516 Predictable Latency Mode: Not Supported 00:34:36.516 Traffic Based Keep ALive: Not Supported 00:34:36.516 Namespace Granularity: Not Supported 00:34:36.516 SQ Associations: Not Supported 00:34:36.516 UUID List: Not Supported 00:34:36.516 Multi-Domain Subsystem: Not Supported 00:34:36.516 Fixed Capacity Management: Not Supported 00:34:36.516 Variable Capacity Management: Not Supported 00:34:36.516 Delete Endurance Group: Not Supported 00:34:36.516 Delete NVM Set: Not Supported 00:34:36.516 Extended LBA Formats Supported: Not Supported 00:34:36.517 Flexible Data Placement Supported: Not Supported 00:34:36.517 00:34:36.517 Controller Memory Buffer Support 00:34:36.517 ================================ 00:34:36.517 Supported: No 00:34:36.517 00:34:36.517 Persistent Memory Region Support 00:34:36.517 ================================ 00:34:36.517 Supported: No 00:34:36.517 00:34:36.517 Admin Command Set Attributes 00:34:36.517 ============================ 00:34:36.517 Security Send/Receive: Not Supported 00:34:36.517 Format NVM: Not Supported 00:34:36.517 Firmware Activate/Download: Not Supported 00:34:36.517 Namespace Management: Not Supported 00:34:36.517 Device Self-Test: Not Supported 00:34:36.517 Directives: Not Supported 00:34:36.517 NVMe-MI: Not Supported 00:34:36.517 Virtualization Management: Not Supported 00:34:36.517 Doorbell Buffer Config: Not Supported 00:34:36.517 Get LBA Status Capability: Not Supported 00:34:36.517 Command & Feature Lockdown Capability: Not Supported 00:34:36.517 Abort Command Limit: 1 00:34:36.517 Async Event Request Limit: 1 00:34:36.517 Number of Firmware Slots: N/A 00:34:36.517 Firmware Slot 1 Read-Only: N/A 00:34:36.517 Firmware Activation Without Reset: N/A 00:34:36.517 Multiple Update Detection Support: N/A 
00:34:36.517 Firmware Update Granularity: No Information Provided 00:34:36.517 Per-Namespace SMART Log: No 00:34:36.517 Asymmetric Namespace Access Log Page: Not Supported 00:34:36.517 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:36.517 Command Effects Log Page: Not Supported 00:34:36.517 Get Log Page Extended Data: Supported 00:34:36.517 Telemetry Log Pages: Not Supported 00:34:36.517 Persistent Event Log Pages: Not Supported 00:34:36.517 Supported Log Pages Log Page: May Support 00:34:36.517 Commands Supported & Effects Log Page: Not Supported 00:34:36.517 Feature Identifiers & Effects Log Page:May Support 00:34:36.517 NVMe-MI Commands & Effects Log Page: May Support 00:34:36.517 Data Area 4 for Telemetry Log: Not Supported 00:34:36.517 Error Log Page Entries Supported: 1 00:34:36.517 Keep Alive: Not Supported 00:34:36.517 00:34:36.517 NVM Command Set Attributes 00:34:36.517 ========================== 00:34:36.517 Submission Queue Entry Size 00:34:36.517 Max: 1 00:34:36.517 Min: 1 00:34:36.517 Completion Queue Entry Size 00:34:36.517 Max: 1 00:34:36.517 Min: 1 00:34:36.517 Number of Namespaces: 0 00:34:36.517 Compare Command: Not Supported 00:34:36.517 Write Uncorrectable Command: Not Supported 00:34:36.517 Dataset Management Command: Not Supported 00:34:36.517 Write Zeroes Command: Not Supported 00:34:36.517 Set Features Save Field: Not Supported 00:34:36.517 Reservations: Not Supported 00:34:36.517 Timestamp: Not Supported 00:34:36.517 Copy: Not Supported 00:34:36.517 Volatile Write Cache: Not Present 00:34:36.517 Atomic Write Unit (Normal): 1 00:34:36.517 Atomic Write Unit (PFail): 1 00:34:36.517 Atomic Compare & Write Unit: 1 00:34:36.517 Fused Compare & Write: Not Supported 00:34:36.517 Scatter-Gather List 00:34:36.517 SGL Command Set: Supported 00:34:36.517 SGL Keyed: Not Supported 00:34:36.517 SGL Bit Bucket Descriptor: Not Supported 00:34:36.517 SGL Metadata Pointer: Not Supported 00:34:36.517 Oversized SGL: Not Supported 00:34:36.517 SGL Metadata Address: Not Supported 00:34:36.517 SGL Offset: Supported 00:34:36.517 Transport SGL Data Block: Not Supported 00:34:36.517 Replay Protected Memory Block: Not Supported 00:34:36.517 00:34:36.517 Firmware Slot Information 00:34:36.517 ========================= 00:34:36.517 Active slot: 0 00:34:36.518 00:34:36.518 00:34:36.518 Error Log 00:34:36.518 ========= 00:34:36.518 00:34:36.518 Active Namespaces 00:34:36.518 ================= 00:34:36.518 Discovery Log Page 00:34:36.518 ================== 00:34:36.518 Generation Counter: 2 00:34:36.518 Number of Records: 2 00:34:36.518 Record Format: 0 00:34:36.518 00:34:36.518 Discovery Log Entry 0 00:34:36.518 ---------------------- 00:34:36.518 Transport Type: 3 (TCP) 00:34:36.518 Address Family: 1 (IPv4) 00:34:36.518 Subsystem Type: 3 (Current Discovery Subsystem) 00:34:36.518 Entry Flags: 00:34:36.518 Duplicate Returned Information: 0 00:34:36.518 Explicit Persistent Connection Support for Discovery: 0 00:34:36.518 Transport Requirements: 00:34:36.518 Secure Channel: Not Specified 00:34:36.518 Port ID: 1 (0x0001) 00:34:36.518 Controller ID: 65535 (0xffff) 00:34:36.518 Admin Max SQ Size: 32 00:34:36.518 Transport Service Identifier: 4420 00:34:36.518 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:36.518 Transport Address: 10.0.0.1 00:34:36.518 Discovery Log Entry 1 00:34:36.518 ---------------------- 00:34:36.518 Transport Type: 3 (TCP) 00:34:36.518 Address Family: 1 (IPv4) 00:34:36.518 Subsystem Type: 2 (NVM Subsystem) 00:34:36.518 Entry Flags: 
00:34:36.518 Duplicate Returned Information: 0 00:34:36.518 Explicit Persistent Connection Support for Discovery: 0 00:34:36.518 Transport Requirements: 00:34:36.518 Secure Channel: Not Specified 00:34:36.518 Port ID: 1 (0x0001) 00:34:36.518 Controller ID: 65535 (0xffff) 00:34:36.518 Admin Max SQ Size: 32 00:34:36.518 Transport Service Identifier: 4420 00:34:36.518 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:34:36.519 Transport Address: 10.0.0.1 00:34:36.519 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:36.519 EAL: No free 2048 kB hugepages reported on node 1 00:34:36.519 get_feature(0x01) failed 00:34:36.519 get_feature(0x02) failed 00:34:36.519 get_feature(0x04) failed 00:34:36.519 ===================================================== 00:34:36.519 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:36.519 ===================================================== 00:34:36.519 Controller Capabilities/Features 00:34:36.519 ================================ 00:34:36.519 Vendor ID: 0000 00:34:36.519 Subsystem Vendor ID: 0000 00:34:36.519 Serial Number: e3d01465bd2bb3050baf 00:34:36.519 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:34:36.519 Firmware Version: 6.7.0-68 00:34:36.519 Recommended Arb Burst: 6 00:34:36.519 IEEE OUI Identifier: 00 00 00 00:34:36.519 Multi-path I/O 00:34:36.519 May have multiple subsystem ports: Yes 00:34:36.519 May have multiple controllers: Yes 00:34:36.519 Associated with SR-IOV VF: No 00:34:36.519 Max Data Transfer Size: Unlimited 00:34:36.519 Max Number of Namespaces: 1024 00:34:36.519 Max Number of I/O Queues: 128 00:34:36.519 NVMe Specification Version (VS): 1.3 00:34:36.519 NVMe Specification Version (Identify): 1.3 00:34:36.519 Maximum Queue Entries: 1024 00:34:36.519 Contiguous Queues Required: No 00:34:36.519 Arbitration Mechanisms Supported 00:34:36.519 Weighted Round Robin: Not Supported 00:34:36.519 Vendor Specific: Not Supported 00:34:36.519 Reset Timeout: 7500 ms 00:34:36.519 Doorbell Stride: 4 bytes 00:34:36.519 NVM Subsystem Reset: Not Supported 00:34:36.519 Command Sets Supported 00:34:36.519 NVM Command Set: Supported 00:34:36.519 Boot Partition: Not Supported 00:34:36.519 Memory Page Size Minimum: 4096 bytes 00:34:36.519 Memory Page Size Maximum: 4096 bytes 00:34:36.519 Persistent Memory Region: Not Supported 00:34:36.519 Optional Asynchronous Events Supported 00:34:36.519 Namespace Attribute Notices: Supported 00:34:36.519 Firmware Activation Notices: Not Supported 00:34:36.519 ANA Change Notices: Supported 00:34:36.519 PLE Aggregate Log Change Notices: Not Supported 00:34:36.520 LBA Status Info Alert Notices: Not Supported 00:34:36.520 EGE Aggregate Log Change Notices: Not Supported 00:34:36.520 Normal NVM Subsystem Shutdown event: Not Supported 00:34:36.520 Zone Descriptor Change Notices: Not Supported 00:34:36.520 Discovery Log Change Notices: Not Supported 00:34:36.520 Controller Attributes 00:34:36.520 128-bit Host Identifier: Supported 00:34:36.520 Non-Operational Permissive Mode: Not Supported 00:34:36.520 NVM Sets: Not Supported 00:34:36.520 Read Recovery Levels: Not Supported 00:34:36.520 Endurance Groups: Not Supported 00:34:36.520 Predictable Latency Mode: Not Supported 00:34:36.520 Traffic Based Keep ALive: Supported 00:34:36.520 Namespace Granularity: Not Supported 
00:34:36.520 SQ Associations: Not Supported 00:34:36.520 UUID List: Not Supported 00:34:36.520 Multi-Domain Subsystem: Not Supported 00:34:36.520 Fixed Capacity Management: Not Supported 00:34:36.520 Variable Capacity Management: Not Supported 00:34:36.520 Delete Endurance Group: Not Supported 00:34:36.520 Delete NVM Set: Not Supported 00:34:36.520 Extended LBA Formats Supported: Not Supported 00:34:36.520 Flexible Data Placement Supported: Not Supported 00:34:36.520 00:34:36.520 Controller Memory Buffer Support 00:34:36.520 ================================ 00:34:36.520 Supported: No 00:34:36.520 00:34:36.520 Persistent Memory Region Support 00:34:36.520 ================================ 00:34:36.520 Supported: No 00:34:36.520 00:34:36.520 Admin Command Set Attributes 00:34:36.520 ============================ 00:34:36.520 Security Send/Receive: Not Supported 00:34:36.520 Format NVM: Not Supported 00:34:36.520 Firmware Activate/Download: Not Supported 00:34:36.520 Namespace Management: Not Supported 00:34:36.520 Device Self-Test: Not Supported 00:34:36.520 Directives: Not Supported 00:34:36.520 NVMe-MI: Not Supported 00:34:36.520 Virtualization Management: Not Supported 00:34:36.520 Doorbell Buffer Config: Not Supported 00:34:36.520 Get LBA Status Capability: Not Supported 00:34:36.520 Command & Feature Lockdown Capability: Not Supported 00:34:36.520 Abort Command Limit: 4 00:34:36.520 Async Event Request Limit: 4 00:34:36.520 Number of Firmware Slots: N/A 00:34:36.520 Firmware Slot 1 Read-Only: N/A 00:34:36.520 Firmware Activation Without Reset: N/A 00:34:36.520 Multiple Update Detection Support: N/A 00:34:36.520 Firmware Update Granularity: No Information Provided 00:34:36.520 Per-Namespace SMART Log: Yes 00:34:36.520 Asymmetric Namespace Access Log Page: Supported 00:34:36.520 ANA Transition Time : 10 sec 00:34:36.520 00:34:36.520 Asymmetric Namespace Access Capabilities 00:34:36.520 ANA Optimized State : Supported 00:34:36.520 ANA Non-Optimized State : Supported 00:34:36.520 ANA Inaccessible State : Supported 00:34:36.520 ANA Persistent Loss State : Supported 00:34:36.520 ANA Change State : Supported 00:34:36.520 ANAGRPID is not changed : No 00:34:36.520 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:34:36.520 00:34:36.521 ANA Group Identifier Maximum : 128 00:34:36.521 Number of ANA Group Identifiers : 128 00:34:36.521 Max Number of Allowed Namespaces : 1024 00:34:36.521 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:34:36.521 Command Effects Log Page: Supported 00:34:36.521 Get Log Page Extended Data: Supported 00:34:36.521 Telemetry Log Pages: Not Supported 00:34:36.521 Persistent Event Log Pages: Not Supported 00:34:36.521 Supported Log Pages Log Page: May Support 00:34:36.521 Commands Supported & Effects Log Page: Not Supported 00:34:36.521 Feature Identifiers & Effects Log Page:May Support 00:34:36.521 NVMe-MI Commands & Effects Log Page: May Support 00:34:36.521 Data Area 4 for Telemetry Log: Not Supported 00:34:36.521 Error Log Page Entries Supported: 128 00:34:36.521 Keep Alive: Supported 00:34:36.521 Keep Alive Granularity: 1000 ms 00:34:36.521 00:34:36.521 NVM Command Set Attributes 00:34:36.521 ========================== 00:34:36.521 Submission Queue Entry Size 00:34:36.521 Max: 64 00:34:36.521 Min: 64 00:34:36.521 Completion Queue Entry Size 00:34:36.521 Max: 16 00:34:36.521 Min: 16 00:34:36.521 Number of Namespaces: 1024 00:34:36.521 Compare Command: Not Supported 00:34:36.521 Write Uncorrectable Command: Not Supported 00:34:36.521 Dataset Management Command: Supported 
00:34:36.521 Write Zeroes Command: Supported 00:34:36.521 Set Features Save Field: Not Supported 00:34:36.521 Reservations: Not Supported 00:34:36.521 Timestamp: Not Supported 00:34:36.521 Copy: Not Supported 00:34:36.521 Volatile Write Cache: Present 00:34:36.521 Atomic Write Unit (Normal): 1 00:34:36.521 Atomic Write Unit (PFail): 1 00:34:36.521 Atomic Compare & Write Unit: 1 00:34:36.521 Fused Compare & Write: Not Supported 00:34:36.521 Scatter-Gather List 00:34:36.521 SGL Command Set: Supported 00:34:36.521 SGL Keyed: Not Supported 00:34:36.521 SGL Bit Bucket Descriptor: Not Supported 00:34:36.521 SGL Metadata Pointer: Not Supported 00:34:36.521 Oversized SGL: Not Supported 00:34:36.521 SGL Metadata Address: Not Supported 00:34:36.521 SGL Offset: Supported 00:34:36.521 Transport SGL Data Block: Not Supported 00:34:36.521 Replay Protected Memory Block: Not Supported 00:34:36.521 00:34:36.521 Firmware Slot Information 00:34:36.521 ========================= 00:34:36.521 Active slot: 0 00:34:36.521 00:34:36.521 Asymmetric Namespace Access 00:34:36.521 =========================== 00:34:36.521 Change Count : 0 00:34:36.521 Number of ANA Group Descriptors : 1 00:34:36.521 ANA Group Descriptor : 0 00:34:36.522 ANA Group ID : 1 00:34:36.522 Number of NSID Values : 1 00:34:36.522 Change Count : 0 00:34:36.522 ANA State : 1 00:34:36.522 Namespace Identifier : 1 00:34:36.522 00:34:36.522 Commands Supported and Effects 00:34:36.522 ============================== 00:34:36.522 Admin Commands 00:34:36.522 -------------- 00:34:36.522 Get Log Page (02h): Supported 00:34:36.522 Identify (06h): Supported 00:34:36.522 Abort (08h): Supported 00:34:36.522 Set Features (09h): Supported 00:34:36.522 Get Features (0Ah): Supported 00:34:36.522 Asynchronous Event Request (0Ch): Supported 00:34:36.522 Keep Alive (18h): Supported 00:34:36.522 I/O Commands 00:34:36.522 ------------ 00:34:36.522 Flush (00h): Supported 00:34:36.522 Write (01h): Supported LBA-Change 00:34:36.522 Read (02h): Supported 00:34:36.522 Write Zeroes (08h): Supported LBA-Change 00:34:36.522 Dataset Management (09h): Supported 00:34:36.522 00:34:36.522 Error Log 00:34:36.522 ========= 00:34:36.522 Entry: 0 00:34:36.522 Error Count: 0x3 00:34:36.522 Submission Queue Id: 0x0 00:34:36.522 Command Id: 0x5 00:34:36.522 Phase Bit: 0 00:34:36.522 Status Code: 0x2 00:34:36.522 Status Code Type: 0x0 00:34:36.522 Do Not Retry: 1 00:34:36.522 Error Location: 0x28 00:34:36.522 LBA: 0x0 00:34:36.522 Namespace: 0x0 00:34:36.522 Vendor Log Page: 0x0 00:34:36.522 ----------- 00:34:36.522 Entry: 1 00:34:36.522 Error Count: 0x2 00:34:36.522 Submission Queue Id: 0x0 00:34:36.522 Command Id: 0x5 00:34:36.522 Phase Bit: 0 00:34:36.522 Status Code: 0x2 00:34:36.522 Status Code Type: 0x0 00:34:36.522 Do Not Retry: 1 00:34:36.522 Error Location: 0x28 00:34:36.522 LBA: 0x0 00:34:36.522 Namespace: 0x0 00:34:36.522 Vendor Log Page: 0x0 00:34:36.522 ----------- 00:34:36.522 Entry: 2 00:34:36.523 Error Count: 0x1 00:34:36.523 Submission Queue Id: 0x0 00:34:36.523 Command Id: 0x4 00:34:36.523 Phase Bit: 0 00:34:36.523 Status Code: 0x2 00:34:36.523 Status Code Type: 0x0 00:34:36.523 Do Not Retry: 1 00:34:36.523 Error Location: 0x28 00:34:36.523 LBA: 0x0 00:34:36.523 Namespace: 0x0 00:34:36.523 Vendor Log Page: 0x0 00:34:36.523 00:34:36.523 Number of Queues 00:34:36.523 ================ 00:34:36.523 Number of I/O Submission Queues: 128 00:34:36.523 Number of I/O Completion Queues: 128 00:34:36.523 00:34:36.523 ZNS Specific Controller Data 00:34:36.523 
============================ 00:34:36.523 Zone Append Size Limit: 0 00:34:36.523 00:34:36.523 00:34:36.523 Active Namespaces 00:34:36.523 ================= 00:34:36.523 get_feature(0x05) failed 00:34:36.523 Namespace ID:1 00:34:36.523 Command Set Identifier: NVM (00h) 00:34:36.523 Deallocate: Supported 00:34:36.523 Deallocated/Unwritten Error: Not Supported 00:34:36.523 Deallocated Read Value: Unknown 00:34:36.523 Deallocate in Write Zeroes: Not Supported 00:34:36.523 Deallocated Guard Field: 0xFFFF 00:34:36.523 Flush: Supported 00:34:36.523 Reservation: Not Supported 00:34:36.523 Namespace Sharing Capabilities: Multiple Controllers 00:34:36.523 Size (in LBAs): 1953525168 (931GiB) 00:34:36.523 Capacity (in LBAs): 1953525168 (931GiB) 00:34:36.523 Utilization (in LBAs): 1953525168 (931GiB) 00:34:36.523 UUID: 83364506-fff5-411d-abc7-17f641e5c1fb 00:34:36.523 Thin Provisioning: Not Supported 00:34:36.523 Per-NS Atomic Units: Yes 00:34:36.523 Atomic Boundary Size (Normal): 0 00:34:36.523 Atomic Boundary Size (PFail): 0 00:34:36.523 Atomic Boundary Offset: 0 00:34:36.523 NGUID/EUI64 Never Reused: No 00:34:36.523 ANA group ID: 1 00:34:36.523 Namespace Write Protected: No 00:34:36.523 Number of LBA Formats: 1 00:34:36.523 Current LBA Format: LBA Format #00 00:34:36.523 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:36.523 00:34:36.523 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:34:36.523 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:36.523 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:34:36.523 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:36.523 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:34:36.523 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:36.523 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:36.523 rmmod nvme_tcp 00:34:36.523 rmmod nvme_fabrics 00:34:36.523 05:23:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:36.523 05:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:34:36.523 05:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:34:36.524 05:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:34:36.524 05:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:36.524 05:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:36.524 05:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:36.524 05:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:36.524 05:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:36.524 05:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:36.524 05:23:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:36.524 05:23:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:39.053 05:23:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:39.053 
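The identify runs above were served by a kernel nvmet target that configure_kernel_target assembled entirely through configfs, and clean_kernel_target tears the same tree down in the trace that follows. Collapsed into plain shell, the lifecycle is roughly the sketch below; the xtrace records the echo values but not their redirect targets, so the attribute file names are reconstructed from the standard kernel nvmet configfs layout and should be read as assumptions rather than the verbatim common.sh source:

# sketch: kernel NVMe-oF/TCP target lifecycle, values taken from the trace
nqn=nqn.2016-06.io.spdk:testnqn
nvmet=/sys/kernel/config/nvmet
modprobe nvmet                   # nvmet-tcp should load on demand for the tcp port

mkdir "$nvmet/subsystems/$nqn"                 # subsystem
mkdir "$nvmet/subsystems/$nqn/namespaces/1"    # one namespace
mkdir "$nvmet/ports/1"                         # one port

echo "SPDK-$nqn"  > "$nvmet/subsystems/$nqn/attr_model"
echo 1            > "$nvmet/subsystems/$nqn/attr_allow_any_host"
echo /dev/nvme0n1 > "$nvmet/subsystems/$nqn/namespaces/1/device_path"
echo 1            > "$nvmet/subsystems/$nqn/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$nvmet/subsystems/$nqn" "$nvmet/ports/1/subsystems/"   # target goes live

# teardown, mirroring clean_kernel_target below: disable, unlink, rmdir leaf-first
echo 0 > "$nvmet/subsystems/$nqn/namespaces/1/enable"
rm -f  "$nvmet/ports/1/subsystems/$nqn"
rmdir  "$nvmet/subsystems/$nqn/namespaces/1"
rmdir  "$nvmet/ports/1"
rmdir  "$nvmet/subsystems/$nqn"
modprobe -r nvmet_tcp nvmet

Once the port symlink exists, the host side reaches the target over the same address, as in the `nvme discover ... -a 10.0.0.1 -t tcp -s 4420` call traced earlier.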
05:23:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:34:39.053 05:23:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:39.053 05:23:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:34:39.053 05:23:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:39.053 05:23:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:39.053 05:23:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:39.053 05:23:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:39.053 05:23:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:34:39.053 05:23:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:34:39.053 05:23:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:39.989 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:39.989 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:39.989 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:39.989 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:39.989 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:39.989 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:39.989 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:39.989 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:39.989 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:39.989 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:39.989 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:39.989 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:39.989 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:39.989 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:39.989 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:39.989 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:41.006 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:41.006 00:34:41.006 real 0m9.568s 00:34:41.006 user 0m2.122s 00:34:41.006 sys 0m3.422s 00:34:41.006 05:23:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:41.006 05:23:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:41.006 ************************************ 00:34:41.006 END TEST nvmf_identify_kernel_target 00:34:41.006 ************************************ 00:34:41.006 05:23:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:41.006 05:23:47 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:41.006 05:23:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:41.006 05:23:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:41.006 05:23:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:41.006 ************************************ 00:34:41.006 START TEST nvmf_auth_host 00:34:41.006 ************************************ 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:41.006 * Looking for test storage... 00:34:41.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:41.006 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:41.007 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:41.007 05:23:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:34:41.007 05:23:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:41.007 05:23:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:34:41.007 05:23:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:41.007 05:23:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:41.007 05:23:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:41.007 05:23:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:41.007 05:23:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:34:41.007 05:23:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:41.007 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:41.007 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:41.007 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:41.007 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:41.007 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:41.007 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:41.007 05:23:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:41.007 05:23:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:41.265 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:41.265 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:41.265 05:23:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:34:41.265 05:23:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:43.165 
05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:43.165 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:43.165 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:43.165 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:43.165 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:43.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:43.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:34:43.165 00:34:43.165 --- 10.0.0.2 ping statistics --- 00:34:43.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:43.165 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:34:43.165 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:43.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:43.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:34:43.165 00:34:43.165 --- 10.0.0.1 ping statistics --- 00:34:43.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:43.166 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:34:43.166 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:43.166 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:34:43.166 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:43.166 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:43.166 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:43.166 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:43.166 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:43.166 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:43.166 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:43.166 05:23:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:43.166 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:43.166 05:23:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:43.166 05:23:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.166 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=846417 00:34:43.166 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:43.166 05:23:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 846417 00:34:43.166 05:23:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 846417 ']' 00:34:43.166 05:23:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:43.166 05:23:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:43.166 05:23:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
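This is the second time the suite builds the namespace-backed TCP topology (nvmf_identify_kernel_target above went through the identical nvmf_tcp_init). Collapsed from the xtrace, with interface names and addresses exactly as logged, the plumbing amounts to:

# sketch: two ports of the E810 NIC split across a netns so NVMe/TCP traffic
# actually crosses the wire instead of short-circuiting through loopback
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                      # target lives in its own ns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # first port moves into the ns
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns

Every target-side command is then prefixed with `ip netns exec cvl_0_0_ns_spdk`, which is why nvmfappstart launches `ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth` and waitforlisten then essentially waits until that process answers on its RPC socket, /var/tmp/spdk.sock.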
00:34:43.166 05:23:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:43.166 05:23:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ee902cddbaefcb13f028fa2e0760ccb2 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Jmp 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ee902cddbaefcb13f028fa2e0760ccb2 0 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ee902cddbaefcb13f028fa2e0760ccb2 0 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ee902cddbaefcb13f028fa2e0760ccb2 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Jmp 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Jmp 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Jmp 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:34:44.100 
05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ef7034374795446c8fd7aa104531101f904da6f84bbe6bcde0fc144a747a2bef 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Gjn 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ef7034374795446c8fd7aa104531101f904da6f84bbe6bcde0fc144a747a2bef 3 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ef7034374795446c8fd7aa104531101f904da6f84bbe6bcde0fc144a747a2bef 3 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ef7034374795446c8fd7aa104531101f904da6f84bbe6bcde0fc144a747a2bef 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:34:44.100 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Gjn 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Gjn 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Gjn 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=194dcb7c264bbf2e405bf5b669c3efdb627dd3979aaca96f 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.2Be 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 194dcb7c264bbf2e405bf5b669c3efdb627dd3979aaca96f 0 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 194dcb7c264bbf2e405bf5b669c3efdb627dd3979aaca96f 0 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=194dcb7c264bbf2e405bf5b669c3efdb627dd3979aaca96f 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.2Be 00:34:44.359 05:23:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.2Be 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.2Be 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ed08d73da4e48e7bb44e0b3f740b7769b261d142f3e95130 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Pgo 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ed08d73da4e48e7bb44e0b3f740b7769b261d142f3e95130 2 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ed08d73da4e48e7bb44e0b3f740b7769b261d142f3e95130 2 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ed08d73da4e48e7bb44e0b3f740b7769b261d142f3e95130 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Pgo 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Pgo 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Pgo 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0e662e1595b4e1bb405e5679285fb2f0 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.HnO 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0e662e1595b4e1bb405e5679285fb2f0 1 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0e662e1595b4e1bb405e5679285fb2f0 1 
00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0e662e1595b4e1bb405e5679285fb2f0 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.HnO 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.HnO 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.HnO 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:44.359 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f424b2dbd5026f0b7991419681c225ab 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.UO1 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f424b2dbd5026f0b7991419681c225ab 1 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f424b2dbd5026f0b7991419681c225ab 1 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f424b2dbd5026f0b7991419681c225ab 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.UO1 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.UO1 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.UO1 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=4b15b189b49ec6d0d7247078225c3b4227e08408aff74a1b 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.CYr 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4b15b189b49ec6d0d7247078225c3b4227e08408aff74a1b 2 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4b15b189b49ec6d0d7247078225c3b4227e08408aff74a1b 2 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4b15b189b49ec6d0d7247078225c3b4227e08408aff74a1b 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:34:44.360 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.CYr 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.CYr 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.CYr 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d1be812d42ef12539b32ebb52060c413 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.pcs 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d1be812d42ef12539b32ebb52060c413 0 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d1be812d42ef12539b32ebb52060c413 0 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d1be812d42ef12539b32ebb52060c413 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.pcs 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.pcs 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.pcs 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=15fb7e1ae0c3e7244ed3e1752d3f44254b09ca14647f6f1937b67c0ea15f026c 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.GbB 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 15fb7e1ae0c3e7244ed3e1752d3f44254b09ca14647f6f1937b67c0ea15f026c 3 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 15fb7e1ae0c3e7244ed3e1752d3f44254b09ca14647f6f1937b67c0ea15f026c 3 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=15fb7e1ae0c3e7244ed3e1752d3f44254b09ca14647f6f1937b67c0ea15f026c 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.GbB 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.GbB 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.GbB 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:44.618 05:23:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 846417 00:34:44.618 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 846417 ']' 00:34:44.618 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:44.618 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:44.618 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:44.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
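[Editorial sketch] The repeated gen_dhchap_key calls above each produce one secret per key slot: len hex characters (len/2 bytes of /dev/urandom, via xxd), wrapped into the NVMe DH-HMAC-CHAP on-wire format and stored mode 0600 in a tempfile. The format, recoverable from the printed keys, is DHHC-1:<two-digit hash id>:<base64 of the secret bytes followed by their little-endian CRC32>:, with hash ids 00/01/02/03 for null/sha256/sha384/sha512. The xtrace only shows "python -" for the formatting step, so the script body below is an assumption reconstructed from that output format, not the test's verbatim code:

    gen_dhchap_key() {
        # usage: gen_dhchap_key <null|sha256|sha384|sha512> <hex length>
        local digest=$1 len=$2
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        local key file

        # <len> hex characters of entropy, i.e. len/2 raw bytes
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
        file=$(mktemp -t "spdk.key-$digest.XXX")

        # DHHC-1:<2-digit hash id>:<base64(secret || crc32_le(secret))>:
        python3 -c '
    import base64, sys, zlib
    key = sys.argv[1].encode()
    crc = zlib.crc32(key).to_bytes(4, "little")
    print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]),
          base64.b64encode(key + crc).decode()), end="")
    ' "$key" "${digests[$digest]}" > "$file"

        chmod 0600 "$file"
        echo "$file"
    }

The format can be confirmed from the log alone: base64-decoding the key1 blob seen later (DHHC-1:00:MTk0...) yields the 48-character hex secret 194dcb7c... generated above, plus four trailing CRC bytes.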
00:34:44.618 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:44.618 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Jmp 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Gjn ]] 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Gjn 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.2Be 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Pgo ]] 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Pgo 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.HnO 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.UO1 ]] 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UO1 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.CYr 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.pcs ]] 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.pcs 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.GbB 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.876 05:23:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.877 05:23:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:44.877 05:23:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.877 05:23:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:44.877 05:23:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:44.877 05:23:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:44.877 05:23:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:44.877 05:23:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:44.877 05:23:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:34:44.877 05:23:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:44.877 05:23:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:44.877 05:23:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:44.877 05:23:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
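[Editorial sketch] With the target process up, the loop above registers every generated keyfile with SPDK's keyring over the RPC socket: key<i> for the host secret of slot i and, where one was generated, ckey<i> for the controller secret used in bidirectional authentication (slot 4 deliberately has no ckey). rpc_cmd is the test wrapper around scripts/rpc.py, so the equivalent manual invocations for this run are (the socket shown is rpc.py's default):

    ./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key0  /tmp/spdk.key-null.Jmp
    ./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Gjn
    ./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key1  /tmp/spdk.key-null.2Be
    ./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Pgo
    # ...and likewise key2/ckey2, key3/ckey3 and key4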
00:34:44.877 05:23:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:34:44.877 05:23:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:34:44.877 05:23:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:44.877 05:23:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:46.249 Waiting for block devices as requested 00:34:46.249 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:46.249 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:46.249 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:46.249 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:46.516 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:46.516 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:46.516 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:46.516 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:46.772 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:46.772 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:46.772 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:46.772 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:47.029 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:47.029 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:47.029 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:47.029 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:47.286 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:47.851 No valid GPT data, bailing 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:34:47.851 00:34:47.851 Discovery Log Number of Records 2, Generation counter 2 00:34:47.851 =====Discovery Log Entry 0====== 00:34:47.851 trtype: tcp 00:34:47.851 adrfam: ipv4 00:34:47.851 subtype: current discovery subsystem 00:34:47.851 treq: not specified, sq flow control disable supported 00:34:47.851 portid: 1 00:34:47.851 trsvcid: 4420 00:34:47.851 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:47.851 traddr: 10.0.0.1 00:34:47.851 eflags: none 00:34:47.851 sectype: none 00:34:47.851 =====Discovery Log Entry 1====== 00:34:47.851 trtype: tcp 00:34:47.851 adrfam: ipv4 00:34:47.851 subtype: nvme subsystem 00:34:47.851 treq: not specified, sq flow control disable supported 00:34:47.851 portid: 1 00:34:47.851 trsvcid: 4420 00:34:47.851 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:47.851 traddr: 10.0.0.1 00:34:47.851 eflags: none 00:34:47.851 sectype: none 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 
]] 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.851 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.109 nvme0n1 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.109 
05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: ]] 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.109 
05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.109 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.367 nvme0n1 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:48.367 05:23:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: ]] 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:48.367 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:48.368 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:48.368 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:48.368 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.368 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.625 nvme0n1 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
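[Editorial sketch] Each connect_authenticate iteration above has two halves. On the kernel-target side (the subsystem nqn.2024-02.io.spdk:cnode0 was exported earlier through configfs, backed by /dev/nvme0n1 on 10.0.0.1:4420, and verified with nvme discover), nvmet_auth_set_key programs the negotiated hash, DH group and DHHC-1 secrets into the host entry nqn.2024-02.io.spdk:host0. The xtrace shows only the echo halves of those writes, so the attribute names below are assumed from the kernel nvmet configfs interface rather than visible in the log. On the SPDK side, the host is restricted to the digest/dhgroup under test and then attaches with the matching key slot. One iteration, with the long secrets elided as in the surrounding trace:

    HOST=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    # target side: DH-HMAC-CHAP parameters for this host
    # (attribute names assumed, not shown by the xtrace)
    echo 'hmac(sha256)'      > "$HOST/dhchap_hash"
    echo 'ffdhe2048'         > "$HOST/dhchap_dhgroup"
    echo 'DHHC-1:00:MTk0...' > "$HOST/dhchap_key"       # host secret (key1, elided)
    echo 'DHHC-1:02:ZWQw...' > "$HOST/dhchap_ctrl_key"  # ctrl secret (ckey1, elided)

    # host side: constrain negotiation, then connect and authenticate
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

A successful handshake surfaces as the "nvme0n1" namespace seen in the trace, after which the controller is detached and the loop moves to the next key slot.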
00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: ]] 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.625 05:23:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.625 nvme0n1 00:34:48.625 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.625 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.625 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.625 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.625 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.625 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.625 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.625 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.625 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.625 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.883 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.883 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.883 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:48.883 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.883 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.883 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:48.883 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:48.883 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:34:48.883 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:34:48.883 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:48.883 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:48.883 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:34:48.883 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: ]] 00:34:48.883 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:34:48.883 05:23:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:48.883 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.883 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:48.883 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:48.883 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:48.883 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.883 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:48.883 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.883 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.883 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.883 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.883 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:48.883 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:48.883 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.884 nvme0n1 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.884 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.142 nvme0n1 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: ]] 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.142 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:49.143 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:49.143 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:49.143 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:49.143 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:49.143 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.143 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.143 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.143 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.143 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:49.143 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:49.143 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:49.143 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.143 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.143 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:49.143 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.143 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:49.143 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:49.143 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:49.143 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:49.143 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.143 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.400 nvme0n1 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: ]] 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.400 05:23:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.658 nvme0n1 00:34:49.658 
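
The for-dhgroup / for-keyid markers (host/auth.sh@101 and @102) frame everything in this stretch of the log. Reconstructed from the xtrace output, not copied from the script, the driver loop looks roughly like this; the dhgroups array is trimmed to the combinations this section actually exercises, and an outer loop over digests is implied though only sha256 appears here:

    digest=sha256
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)  # as seen in this section
    # keys[0..4]/ckeys[0..4] hold the DHHC-1 secrets; ckeys[4] is empty.
    for dhgroup in "${dhgroups[@]}"; do                     # @101
        for keyid in "${!keys[@]}"; do                      # @102
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103: target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104: host side
        done
    done

Each iteration of the inner loop produces one of the set-key/attach/verify/detach cycles that repeat through the rest of this log.
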
05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: ]] 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:49.658 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.915 nvme0n1 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
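
One bash idiom in the trace is worth calling out: the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion at host/auth.sh@58. The ":+" alternate-value form yields a two-element array when a controller key exists and an empty array when it does not, which is why the keyid-4 passes (empty ckeys[4], visible as "[[ -z '' ]]" at @51) attach without --dhchap-ctrlr-key while keyids 0-3 authenticate bidirectionally. A stand-alone demonstration, with the secret value elided to a placeholder:

    ckeys=([3]="DHHC-1:00:..." [4]="")   # placeholder, not the real secret
    for keyid in 3 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid: ${#ckey[@]} extra arg(s) -> ${ckey[*]}"
    done
    # keyid=3: 2 extra arg(s) -> --dhchap-ctrlr-key ckey3
    # keyid=4: 0 extra arg(s) ->
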
00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: ]] 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.915 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.171 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.171 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.171 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:50.171 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:50.171 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:50.171 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.171 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.171 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:50.171 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.171 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:50.171 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:50.171 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:50.171 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:50.171 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.172 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.172 nvme0n1 00:34:50.172 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.172 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.172 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.172 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.172 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.172 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.172 
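
The recurring "[[ nvme0 == \n\v\m\e\0 ]]" lines look mangled but are not: xtrace backslash-escapes every character of a quoted right-hand side of == so the printed trace preserves the fact that the pattern is matched literally. The verification step at host/auth.sh@64-@65 therefore amounts to:

    # What the "@64 [[ nvme0 == \n\v\m\e\0 ]]" lines correspond to:
    # confirm the attached controller is named nvme0, then tear it down.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]     # quoted RHS => literal match in the script
    rpc_cmd bdev_nvme_detach_controller nvme0
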
05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.172 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.172 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.172 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.429 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.429 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.429 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:50.429 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.429 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:50.429 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:50.429 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.430 05:23:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.430 nvme0n1 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.430 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: ]] 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:50.687 05:23:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.687 05:23:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.978 nvme0n1 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: ]] 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:50.978 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.979 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:50.979 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.979 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.979 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.979 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.979 05:23:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:50.979 05:23:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:50.979 05:23:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:50.979 05:23:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.979 05:23:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.979 05:23:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:50.979 05:23:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.979 05:23:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:50.979 05:23:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:50.979 05:23:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:50.979 05:23:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:50.979 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.979 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.236 nvme0n1 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: ]] 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.236 05:23:57 
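
Every attach is preceded by the same nvmf/common.sh@741-@755 block resolving the target address. Reconstructed from the trace (control flow inferred, not copied from the script), get_main_ns_ip maps the transport to the *name* of an environment variable and then dereferences it with ${!ip} indirection, which is why the trace shows "ip=NVMF_INITIATOR_IP" followed by "echo 10.0.0.1":

    get_main_ns_ip() {
        local ip                                    # @741
        local -A ip_candidates=()                   # @742
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP  # @744
        ip_candidates["tcp"]=NVMF_INITIATOR_IP      # @745
        [[ -z $TEST_TRANSPORT ]] && return 1                    # @747
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # @747
        ip=${ip_candidates[$TEST_TRANSPORT]}        # @748
        [[ -z ${!ip} ]] && return 1                 # @750: indirect deref
        echo "${!ip}"                               # @755
    }
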
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.236 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.494 nvme0n1 00:34:51.494 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.494 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.494 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.494 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.494 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.494 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.494 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.494 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.494 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.494 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.750 05:23:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.750 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.750 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:51.750 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.750 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:51.750 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:51.750 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
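
Collapsing the xtrace noise, one host-side pass (connect_authenticate, host/auth.sh@55-@65) boils down to four RPCs. The commands below are lifted from the ffdhe4096/keyid-2 cycle traced just above; key2 and ckey2 are the bdev-layer key names loaded earlier in the test:

    # One connect_authenticate pass, condensed from the trace above.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

Restricting bdev_nvme_set_options to a single digest and DH group before each attach is what forces the negotiation onto the exact combination under test.
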
00:34:51.750 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:34:51.750 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:34:51.750 05:23:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:51.750 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:51.750 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:34:51.750 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: ]] 00:34:51.750 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:34:51.750 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:51.750 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.750 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:51.750 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:51.750 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:51.750 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.750 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:51.750 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.750 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.750 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.750 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.750 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:51.750 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:51.751 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:51.751 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.751 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.751 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:51.751 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.751 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:51.751 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:51.751 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:51.751 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:51.751 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.751 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.007 nvme0n1 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.007 05:23:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.007 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.265 nvme0n1 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:34:52.265 05:23:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: ]] 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.265 05:23:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.830 nvme0n1 00:34:52.830 05:23:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.830 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.830 05:23:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.830 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.830 05:23:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.830 05:23:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.830 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.830 
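
On the target side, nvmet_auth_set_key (host/auth.sh@42-@51) echoes the digest, DH group, and secrets seen in the trace. The redirect targets are not captured by xtrace; the sketch below assumes the standard kernel nvmet configfs host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key), an inference that at least matches the kernel-crypto "hmac(sha256)" spelling in the @48 echo. Secrets are elided:

    # Assumed destinations for the @48-@51 echoes; paths are an inference,
    # not visible in this log.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)'      > "$host/dhchap_hash"      # @48
    echo ffdhe6144           > "$host/dhchap_dhgroup"   # @49
    echo 'DHHC-1:00:ZWU5...' > "$host/dhchap_key"       # @50 (elided)
    echo 'DHHC-1:03:ZWY3...' > "$host/dhchap_ctrl_key"  # @51 (elided)
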
05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.830 05:23:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.830 05:23:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: ]] 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:53.088 05:23:59 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.088 05:23:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.654 nvme0n1 00:34:53.654 05:23:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.654 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:53.654 05:23:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.654 05:23:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.654 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:53.654 05:23:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.654 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:53.654 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:53.654 05:23:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.654 05:23:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.654 05:23:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.654 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: ]] 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.655 05:23:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.219 nvme0n1 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:54.219 
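nvmet_auth_set_key (@42-@51 above) primes the kernel soft target with the key it should expect before each attach; xtrace shows the echoes but not their redirection targets. A hedged reconstruction, assuming the Linux nvmet configfs layout and the keys/ckeys arrays populated earlier in auth.sh:

```bash
# Hedged reconstruction of nvmet_auth_set_key; the configfs paths are an
# assumption (Linux nvmet soft-target layout), not visible in the xtrace.
nvmet_auth_set_key() {
	local digest=$1 dhgroup=$2 keyid=$3
	local key=${keys[$keyid]} ckey=${ckeys[$keyid]}   # arrays set earlier (assumed)
	local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

	echo "hmac($digest)" > "$host/dhchap_hash"      # @48: e.g. 'hmac(sha256)'
	echo "$dhgroup"      > "$host/dhchap_dhgroup"   # @49
	echo "$key"          > "$host/dhchap_key"       # @50
	# @51: the controller (bidirectional) key is optional -- keyid 4 has none.
	[[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}
```

The echo order matches the @48-@51 trace exactly; the 'hmac(sha256)' string is the kernel crypto-API digest name the target consumes.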
05:24:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: ]] 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.219 05:24:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.819 nvme0n1 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.819 05:24:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.386 nvme0n1 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: ]] 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:55.386 05:24:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.319 nvme0n1 00:34:56.319 05:24:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.319 05:24:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.319 05:24:02 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.319 05:24:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.319 05:24:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.319 05:24:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: ]] 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.576 05:24:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.507 nvme0n1 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: ]] 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.507 05:24:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.439 nvme0n1 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.439 
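get_main_ns_ip (@741-@755 above) maps the transport under test to the name of an environment variable, then dereferences it with bash indirect expansion; the trace shows the tcp branch resolving to 10.0.0.1. A condensed sketch of the same pattern (the early returns are inferred, since the failing branches never fire in this run):

```bash
# Condensed sketch of the get_main_ns_ip pattern traced at @741-@755.
get_main_ns_ip() {
	local ip
	local -A ip_candidates=(
		[rdma]=NVMF_FIRST_TARGET_IP
		[tcp]=NVMF_INITIATOR_IP
	)

	# @747: bail out on an unknown transport (inferred; never hit here).
	[[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]} ]] || return 1

	ip=${ip_candidates[$TEST_TRANSPORT]}   # @748: ip now holds a *variable name*
	[[ -n ${!ip} ]] || return 1            # @750: indirect expansion to its value
	echo "${!ip}"                          # @755: 10.0.0.1 in this run
}
```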
05:24:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: ]] 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.439 05:24:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.697 05:24:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.697 05:24:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.697 05:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:58.697 05:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:58.697 05:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:58.697 05:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.697 05:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.697 05:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:34:58.697 05:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.697 05:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:58.697 05:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:58.697 05:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:58.697 05:24:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:58.697 05:24:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.697 05:24:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.631 nvme0n1 00:34:59.631 05:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.631 05:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.631 05:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.631 05:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.631 05:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.631 05:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.631 05:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.631 05:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.631 05:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.631 05:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.631 05:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.631 05:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.631 05:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:59.631 05:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.631 05:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:59.631 05:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:59.631 05:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:59.631 05:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:34:59.631 05:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:59.631 05:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:59.631 05:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:59.631 05:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:34:59.631 05:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:59.631 05:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:59.631 05:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.631 05:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:59.631 05:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:59.632 
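The DHHC-1 strings exercised throughout follow the NVMe DH-HMAC-CHAP secret representation: "DHHC-1:&lt;t&gt;:&lt;base64&gt;:", where &lt;t&gt; names the transform applied to the stored secret (00 = none, 01/02/03 = SHA-256/384/512) and the base64 payload carries the secret followed by a 4-byte CRC-32. One way to inspect a key taken from this log (the ckey for key index 3 above):

```bash
# Splitting and sizing a DHHC-1 secret; the value is copied from this log.
key='DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ:'
IFS=: read -r magic xform b64 _ <<< "$key"
echo "$magic/$xform"             # DHHC-1/00 -> secret stored untransformed
echo "$b64" | base64 -d | wc -c  # 36 bytes: 32-byte secret + 4-byte CRC-32
```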
05:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:59.632 05:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.632 05:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:59.632 05:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.632 05:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.632 05:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.632 05:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.632 05:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:59.632 05:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:59.632 05:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:59.632 05:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.632 05:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.632 05:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:59.632 05:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.632 05:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:59.632 05:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:59.632 05:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:59.632 05:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:59.632 05:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.632 05:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.566 nvme0n1 00:35:00.566 05:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.566 05:24:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.566 05:24:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.566 05:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.566 05:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.566 05:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.566 05:24:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.566 05:24:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.566 05:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.566 05:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.566 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.566 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:00.566 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:00.566 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.566 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:35:00.566 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.566 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:00.566 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:00.566 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:00.566 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:35:00.566 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:35:00.566 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:00.566 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:00.566 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:35:00.566 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: ]] 00:35:00.566 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:35:00.566 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:35:00.566 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.567 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:00.567 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:00.567 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:00.567 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.567 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:00.567 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.567 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.567 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.567 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.567 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:00.567 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:00.567 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:00.567 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.567 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.567 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:00.567 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.567 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:00.567 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:00.567 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:00.567 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:00.567 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.567 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.825 nvme0n1 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: ]] 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
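The ckey=(...) assignment at @58 is why key index 4 authenticates unidirectionally: ${var:+word} expands to the extra arguments only when a controller key exists, and ckeys[4] is empty (hence the "[[ -z '' ]]" at @51 and the key4 attach commands with no --dhchap-ctrlr-key). A self-contained illustration with placeholder values:

```bash
# ${ckeys[keyid]:+...} yields the two extra arguments only when a
# controller key is present; values below are placeholders.
ckeys=([1]='DHHC-1:02:placeholder' [4]='')
for keyid in 1 4; do
	ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
	echo "keyid=$keyid: ${#ckey[@]} arg(s)"  # 2 for keyid=1, 0 for keyid=4
done
```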
00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.825 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.084 nvme0n1 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: ]] 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.084 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.343 nvme0n1 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: ]] 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.343 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.602 nvme0n1 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.602 05:24:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.861 nvme0n1 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: ]] 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
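
The `echo 'hmac(sha384)'` / `echo ffdhe3072` / `echo DHHC-1:...` triple traced at host/auth.sh@48-51 in each iteration is the target-side half of the test: it provisions the digest, the DH group, and the DH-HMAC-CHAP secrets for the host's NQN before the initiator attempts to authenticate. Bash xtrace does not print redirections, so only the `echo` halves appear in the log; the sketch below fills in where those writes plausibly land, assuming the Linux kernel nvmet target's configfs host attributes (`dhchap_hash`, `dhchap_dhgroup`, `dhchap_key`, `dhchap_ctrl_key`) -- the configfs path and hostnqn here are illustrative assumptions, not read from this log.

    # Hypothetical reconstruction of nvmet_auth_set_key's effect (auth.sh@48-51)
    nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$nvmet_host/dhchap_hash"     # @48: digest
    echo ffdhe3072      > "$nvmet_host/dhchap_dhgroup"  # @49: DH group
    echo "$key"         > "$nvmet_host/dhchap_key"      # @50: host secret (the DHHC-1:xx:... string)
    # @51 runs only when a bidirectional (controller) secret is configured:
    [[ -n $ckey ]] && echo "$ckey" > "$nvmet_host/dhchap_ctrl_key"
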
00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.861 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.120 nvme0n1 00:35:02.120 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.120 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: ]] 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
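
The host-side half, connect_authenticate (host/auth.sh@55-65), is fully visible in the trace above: restrict the SPDK host to a single digest/DH-group pair, attach with the matching keys, check that the authenticated controller actually appeared, and detach before the next combination. The 10.0.0.1 comes from get_main_ns_ip, whose `ip_candidates` map selects NVMF_INITIATOR_IP for tcp transports (NVMF_FIRST_TARGET_IP would be used for rdma). Reproduced as a stand-alone sketch -- invoking rpc.py directly is an assumption, the test drives the same RPCs through its rpc_cmd wrapper, and key1/ckey1 are key names registered earlier in the run, outside this excerpt:

    # auth.sh@60: only sha384 + ffdhe3072 are acceptable for this round
    rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    # auth.sh@61: attach over TCP, authenticating with key1 (+ ckey1 for bidirectional auth)
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # auth.sh@64-65: verify the controller came up under the expected name, then tear it down
    [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc.py bdev_nvme_detach_controller nvme0
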
00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.121 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.379 nvme0n1 00:35:02.379 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.379 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:02.379 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.379 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:02.379 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: ]] 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.380 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.638 nvme0n1 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: ]] 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.638 05:24:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.897 nvme0n1 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.897 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.155 nvme0n1 00:35:03.155 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.155 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.155 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.155 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.155 05:24:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.155 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.155 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.155 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:03.155 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.155 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.155 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.155 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:03.155 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:03.155 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:03.155 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:03.155 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:03.155 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:03.155 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:03.155 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:35:03.155 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:35:03.155 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:03.155 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:03.155 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:35:03.155 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: ]] 00:35:03.155 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:35:03.155 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:03.155 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:03.156 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:03.156 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:03.156 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:03.156 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:03.156 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:03.156 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.156 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.156 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.156 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:03.156 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:03.156 05:24:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:35:03.156 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:03.156 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.156 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.156 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:03.156 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:03.156 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:03.156 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:03.156 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:03.156 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:03.156 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.156 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.414 nvme0n1 00:35:03.414 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.414 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.414 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.414 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.414 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.414 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.414 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.414 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:03.414 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.414 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.414 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.414 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:03.414 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:03.414 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:03.414 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:03.414 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:03.414 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:03.414 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:35:03.414 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:35:03.414 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:03.414 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:03.414 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:35:03.414 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: ]] 00:35:03.415 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:35:03.415 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:35:03.415 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:03.415 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:03.415 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:03.415 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:03.415 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:03.415 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:03.415 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.415 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.415 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.415 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:03.415 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:03.415 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:03.415 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:03.415 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.415 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.415 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:03.415 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:03.415 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:03.415 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:03.415 05:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:03.415 05:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:03.415 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.415 05:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.673 nvme0n1 00:35:03.673 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.673 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.673 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.673 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.673 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.673 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.931 05:24:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: ]] 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.931 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.189 nvme0n1 00:35:04.189 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.189 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: ]] 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:35:04.190 05:24:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.190 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.448 nvme0n1 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:35:04.449 05:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.015 nvme0n1 00:35:05.015 05:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.015 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: ]] 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.016 05:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.582 nvme0n1 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: ]] 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.582 05:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.149 nvme0n1 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.149 05:24:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: ]] 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.149 05:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.714 nvme0n1 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: ]] 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.714 05:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.279 nvme0n1 00:35:07.279 05:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.279 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.279 05:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.279 05:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.279 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.279 05:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.537 05:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.162 nvme0n1 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: ]] 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.162 05:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.120 nvme0n1 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: ]] 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:09.120 05:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.121 05:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.054 nvme0n1 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: ]] 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.054 05:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.987 nvme0n1 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: ]] 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.987 05:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.919 nvme0n1 00:35:11.919 05:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.919 05:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:35:11.919 05:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.919 05:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.919 05:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.919 05:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.176 05:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.176 05:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.176 05:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.176 05:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.176 05:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.176 05:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.176 05:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:35:12.176 05:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.176 05:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:12.176 05:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:12.176 05:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:12.176 05:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:35:12.176 05:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:12.176 05:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:12.176 05:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:12.176 05:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:35:12.176 05:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:12.176 05:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:35:12.176 05:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.176 05:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:12.176 05:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:12.176 05:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:12.176 05:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.176 05:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:12.177 05:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.177 05:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.177 05:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.177 05:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.177 05:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:12.177 05:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:12.177 05:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:12.177 05:24:18 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.177 05:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.177 05:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:12.177 05:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.177 05:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:12.177 05:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:12.177 05:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:12.177 05:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:12.177 05:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.177 05:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.110 nvme0n1 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: ]] 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.110 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.368 nvme0n1 00:35:13.368 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.368 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.368 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.368 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.368 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.368 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.368 05:24:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.368 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.368 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.368 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.368 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.368 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.368 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:13.368 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.368 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:13.368 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:13.368 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:13.368 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:35:13.368 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:35:13.368 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:13.368 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:13.368 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:35:13.368 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: ]] 00:35:13.369 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:35:13.369 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:13.369 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.369 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:13.369 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:13.369 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:13.369 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.369 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:13.369 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.369 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.369 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.369 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.369 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:13.369 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:13.369 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:13.369 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.369 05:24:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.369 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:13.369 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.369 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:13.369 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:13.369 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:13.369 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:13.369 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.369 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.626 nvme0n1 00:35:13.626 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.626 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.626 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.626 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.626 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.626 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.626 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.626 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.626 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.626 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.626 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.626 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.626 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:13.626 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.626 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:13.626 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:13.626 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:13.626 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:35:13.626 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:35:13.626 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:13.626 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:13.626 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:35:13.626 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: ]] 00:35:13.626 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:35:13.627 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:35:13.627 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.627 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:13.627 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:13.627 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:13.627 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.627 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:13.627 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.627 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.627 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.627 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.627 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:13.627 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:13.627 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:13.627 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.627 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.627 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:13.627 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.627 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:13.627 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:13.627 05:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:13.627 05:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:13.627 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.627 05:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.627 nvme0n1 00:35:13.627 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.627 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.627 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.627 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.627 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.884 05:24:20 
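[editor's note] The nvmet_auth_set_key step traced at auth.sh@42-@51 (and about to run again below for keyid 3) stages the target side of each handshake: the four echo lines write the digest, DH group, host key, and optional controller key into the kernel nvmet host entry. A minimal standalone sketch, assuming the stock Linux nvmet configfs layout (the attribute paths below are an assumption, not read from this trace):
hostnqn=nqn.2024-02.io.spdk:host0                        # host NQN used by the attach calls in this log
cfg=/sys/kernel/config/nvmet/hosts/$hostnqn              # assumed configfs location
echo 'hmac(sha512)' > "$cfg/dhchap_hash"                 # digest, as echoed at auth.sh@48
echo ffdhe2048      > "$cfg/dhchap_dhgroup"              # DH group, as echoed at auth.sh@49
echo "$key"         > "$cfg/dhchap_key"                  # DHHC-1 host secret (auth.sh@50)
[[ -n $ckey ]] && echo "$ckey" > "$cfg/dhchap_ctrl_key"  # bidirectional secret, only if set (auth.sh@51)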
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: ]] 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:13.884 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:13.884 05:24:20 
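[editor's note] get_main_ns_ip (nvmf/common.sh@741-@755, traced just above ending in "echo 10.0.0.1") reduces to a transport lookup: map the transport name to the environment variable that holds the address, then dereference it indirectly. A condensed sketch of that logic; using TEST_TRANSPORT as the selector is an assumption, and this run is tcp:
declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
varname=${ip_candidates[$TEST_TRANSPORT]}   # tcp -> NVMF_INITIATOR_IP, per common.sh@745
ip=${!varname}                              # indirect expansion; 10.0.0.1 in this run
[[ -n $ip ]] && echo "$ip"                  # the [[ -z ... ]] guards at @747-@750, inverted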
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:13.885 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.885 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.885 nvme0n1 00:35:13.885 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.885 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.885 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.885 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.885 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.885 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.143 nvme0n1 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.143 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.401 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.401 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:14.401 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:14.401 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:14.401 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:14.401 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:14.401 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:14.401 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:14.401 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:35:14.401 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:35:14.401 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:14.401 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:14.401 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: ]] 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.402 nvme0n1 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.402 
05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.402 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: ]] 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.661 05:24:20 
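[editor's note] On the host side each iteration is just two RPCs: restrict the allowed DH-HMAC-CHAP parameters, then attach with the matching keyring entries. rpc_cmd is SPDK's wrapper around scripts/rpc.py (the default RPC socket is assumed), and the key names key1/ckey1 refer to keyring entries registered earlier in the test, outside this excerpt. For the ffdhe3072/keyid=1 pass traced here:
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1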
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.661 05:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.661 nvme0n1 00:35:14.661 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.661 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:14.661 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.661 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:14.661 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.661 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
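[editor's note] The DHHC-1 strings echoed on the next lines follow the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<t>:<base64>:, where <t> encodes the optional hash transform applied to the secret (00 = none, 01/02/03 = SHA-256/384/512) and the base64 payload is the secret followed by a 4-byte CRC-32. An illustrative decode of the key used below; this check is editorial, not part of the test:
secret='DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272:'
b64=${secret#DHHC-1:*:}                  # strip the DHHC-1:<t>: prefix
echo -n "${b64%:}" | base64 -d | wc -c   # 36 here: a 32-byte secret plus the CRC-32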
00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: ]] 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:14.919 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:14.920 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:14.920 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:14.920 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:14.920 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:14.920 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:14.920 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:14.920 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:14.920 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:14.920 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.920 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.920 nvme0n1 00:35:14.920 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.920 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:14.920 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:14.920 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.920 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.920 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.178 05:24:21 
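[editor's note] Each successful handshake is then verified and torn down identically (auth.sh@64-@65, the step the next lines perform for this key): list controllers, expect exactly nvme0, detach. The bare nvme0n1 tokens interleaved through this log appear to be the attached namespace surfacing in the captured output. In sketch form:
name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]                            # the nvme0 == \n\v\m\e\0 match in the trace
scripts/rpc.py bdev_nvme_detach_controller nvme0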
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: ]] 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.178 nvme0n1 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.178 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:15.437 
05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.437 nvme0n1 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.437 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: ]] 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.696 05:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.979 nvme0n1 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: ]] 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:15.979 05:24:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.979 05:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.237 nvme0n1 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
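[editor's note] Zooming out, the auth.sh@101-@104 tags show the driver loop that generates every block in this section: an outer walk over DH groups and an inner walk over the five key indices, each pairing a target-side key install with one authenticated connect. A reconstructed shape, with the group list limited to what this excerpt exercises:
digest=sha512
for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do        # auth.sh@101
    for keyid in "${!keys[@]}"; do                      # auth.sh@102, keyids 0-4
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"       # auth.sh@103
        connect_authenticate "$digest" "$dhgroup" "$keyid"     # auth.sh@104
    done
done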
00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: ]] 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.237 05:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.801 nvme0n1 00:35:16.801 05:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.801 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:35:16.801 05:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:16.801 05:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.801 05:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: ]] 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.801 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.058 nvme0n1 00:35:17.058 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.058 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.058 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:17.058 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.058 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.058 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.058 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:17.058 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:17.058 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.058 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.058 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.058 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:17.058 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.059 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.316 nvme0n1 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: ]] 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
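At this point the trace has finished the sha512/ffdhe4096 pass and the outer loop (host/auth.sh@101) has advanced to ffdhe6144, restarting the inner keyid loop at 0. Condensed from the visible xtrace, the sweep being exercised looks roughly like the sketch below; this is a paraphrase of the trace, not host/auth.sh verbatim, and it assumes the harness helpers (rpc_cmd, nvmet_auth_set_key) and the keys/ckeys arrays that the trace references.

    # One pass of the digest/dhgroup/keyid sweep, reconstructed from the xtrace.
    # ckeys[4] is empty, which is why keyid=4 attaches without a ctrlr key.
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        # provision the target (kernel nvmet) side with key $keyid
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
        # pin the initiator to a single digest/dhgroup combination
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        # pass a controller key only when a ckey exists for this keyid
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"
        # authentication succeeded if the controller shows up, then tear it down
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
      done
    done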
00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:17.316 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.317 05:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.881 nvme0n1 00:35:17.881 05:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.881 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.881 05:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.881 05:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.881 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:17.881 05:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.881 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:17.881 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:17.881 05:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.881 05:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: ]] 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
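Every secret echoed in this log uses the NVMe-oF DH-HMAC-CHAP representation DHHC-1:tt:<base64>:, where tt records the transform the secret was generated with (00 = unhashed; 01/02/03 correspond to SHA-256/384/512) and the base64 payload is the raw key followed by a 4-byte CRC-32 integrity check. A quick sanity check of one key copied from the trace (a sketch using only bash and coreutils):

    # strip the "DHHC-1:tt:" prefix and trailing ":" and measure the payload
    key='DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==:'
    payload=${key#DHHC-1:*:}   # drop the version and transform fields
    payload=${payload%:}       # drop the terminating colon
    echo -n "$payload" | base64 -d | wc -c   # prints 52: a 48-byte key plus CRC-32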
00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.139 05:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.705 nvme0n1 00:35:18.705 05:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.705 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.705 05:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.705 05:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.705 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.705 05:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.705 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.705 05:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.705 05:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.705 05:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: ]] 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.705 05:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.272 nvme0n1 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: ]] 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:19.272 05:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.273 05:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.273 05:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.273 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.273 05:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:19.273 05:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:19.273 05:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:19.273 05:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.273 05:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.273 05:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:19.273 05:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.273 05:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:19.273 05:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:19.273 05:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:19.273 05:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:19.273 05:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.273 05:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.842 nvme0n1 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.842 05:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.406 nvme0n1 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.406 05:24:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWU5MDJjZGRiYWVmY2IxM2YwMjhmYTJlMDc2MGNjYjJ+fheu: 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: ]] 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWY3MDM0Mzc0Nzk1NDQ2YzhmZDdhYTEwNDUzMTEwMWY5MDRkYTZmODRiYmU2YmNkZTBmYzE0NGE3NDdhMmJlZgJZcMY=: 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.406 05:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.338 nvme0n1 00:35:21.338 05:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.338 05:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:21.338 05:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:21.338 05:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.338 05:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.338 05:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: ]] 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.596 05:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.574 nvme0n1 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.574 05:24:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2NjJlMTU5NWI0ZTFiYjQwNWU1Njc5Mjg1ZmIyZjAue272: 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: ]] 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQyNGIyZGJkNTAyNmYwYjc5OTE0MTk2ODFjMjI1YWKyUeUm: 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.574 05:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.508 nvme0n1 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIxNWIxODliNDllYzZkMGQ3MjQ3MDc4MjI1YzNiNDIyN2UwODQwOGFmZjc0YTFiGIQLEg==: 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: ]] 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFiZTgxMmQ0MmVmMTI1MzliMzJlYmI1MjA2MGM0MTNqbGAQ: 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:35:23.508 05:24:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.508 05:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.883 nvme0n1 00:35:24.883 05:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.883 05:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.883 05:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.883 05:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.883 05:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.883 05:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.883 05:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.883 05:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.883 05:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.883 05:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.883 05:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.883 05:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:24.883 05:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:35:24.883 05:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.883 05:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:24.883 05:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:24.883 05:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:24.883 05:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTVmYjdlMWFlMGMzZTcyNDRlZDNlMTc1MmQzZjQ0MjU0YjA5Y2ExNDY0N2Y2ZjE5MzdiNjdjMGVhMTVmMDI2Y3odxe8=: 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:35:24.884 05:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.820 nvme0n1 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk0ZGNiN2MyNjRiYmYyZTQwNWJmNWI2NjljM2VmZGI2MjdkZDM5NzlhYWNhOTZmqey2Lw==: 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: ]] 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQwOGQ3M2RhNGU0OGU3YmI0NGUwYjNmNzQwYjc3NjliMjYxZDE0MmYzZTk1MTMwYemvFA==: 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.820 
05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.820 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.820 request: 00:35:25.820 { 00:35:25.820 "name": "nvme0", 00:35:25.820 "trtype": "tcp", 00:35:25.820 "traddr": "10.0.0.1", 00:35:25.820 "adrfam": "ipv4", 00:35:25.820 "trsvcid": "4420", 00:35:25.820 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:25.820 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:25.820 "prchk_reftag": false, 00:35:25.820 "prchk_guard": false, 00:35:25.821 "hdgst": false, 00:35:25.821 "ddgst": false, 00:35:25.821 "method": "bdev_nvme_attach_controller", 00:35:25.821 "req_id": 1 00:35:25.821 } 00:35:25.821 Got JSON-RPC error response 00:35:25.821 response: 00:35:25.821 { 00:35:25.821 "code": -5, 00:35:25.821 "message": "Input/output error" 00:35:25.821 } 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.821 request: 00:35:25.821 { 00:35:25.821 "name": "nvme0", 00:35:25.821 "trtype": "tcp", 00:35:25.821 "traddr": "10.0.0.1", 00:35:25.821 "adrfam": "ipv4", 00:35:25.821 "trsvcid": "4420", 00:35:25.821 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:25.821 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:25.821 "prchk_reftag": false, 00:35:25.821 "prchk_guard": false, 00:35:25.821 "hdgst": false, 00:35:25.821 "ddgst": false, 00:35:25.821 "dhchap_key": "key2", 00:35:25.821 "method": "bdev_nvme_attach_controller", 00:35:25.821 "req_id": 1 00:35:25.821 } 00:35:25.821 Got JSON-RPC error response 00:35:25.821 response: 00:35:25.821 { 00:35:25.821 "code": -5, 00:35:25.821 "message": "Input/output error" 00:35:25.821 } 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:35:25.821 05:24:32 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.821 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.080 request: 00:35:26.080 { 00:35:26.080 "name": "nvme0", 00:35:26.080 "trtype": "tcp", 00:35:26.080 "traddr": "10.0.0.1", 00:35:26.080 "adrfam": "ipv4", 
00:35:26.080 "trsvcid": "4420", 00:35:26.080 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:26.080 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:26.080 "prchk_reftag": false, 00:35:26.080 "prchk_guard": false, 00:35:26.080 "hdgst": false, 00:35:26.080 "ddgst": false, 00:35:26.080 "dhchap_key": "key1", 00:35:26.080 "dhchap_ctrlr_key": "ckey2", 00:35:26.080 "method": "bdev_nvme_attach_controller", 00:35:26.080 "req_id": 1 00:35:26.080 } 00:35:26.080 Got JSON-RPC error response 00:35:26.080 response: 00:35:26.080 { 00:35:26.080 "code": -5, 00:35:26.080 "message": "Input/output error" 00:35:26.080 } 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:26.080 rmmod nvme_tcp 00:35:26.080 rmmod nvme_fabrics 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 846417 ']' 00:35:26.080 05:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 846417 00:35:26.081 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 846417 ']' 00:35:26.081 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 846417 00:35:26.081 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:35:26.081 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:26.081 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 846417 00:35:26.081 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:26.081 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:26.081 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 846417' 00:35:26.081 killing process with pid 846417 00:35:26.081 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 846417 00:35:26.081 05:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 846417 00:35:27.458 05:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
00:35:27.458 05:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:27.458 05:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:27.458 05:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:27.458 05:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:27.458 05:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:27.458 05:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:27.458 05:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:29.359 05:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:29.359 05:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:29.359 05:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:29.359 05:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:35:29.359 05:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:35:29.359 05:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:35:29.359 05:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:29.359 05:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:29.359 05:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:29.359 05:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:29.359 05:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:35:29.359 05:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:35:29.359 05:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:30.733 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:30.733 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:30.733 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:30.733 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:30.733 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:30.733 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:30.733 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:30.733 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:30.733 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:30.733 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:30.733 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:30.733 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:30.733 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:30.733 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:30.733 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:30.733 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:31.668 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:31.668 05:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Jmp /tmp/spdk.key-null.2Be /tmp/spdk.key-sha256.HnO /tmp/spdk.key-sha384.CYr /tmp/spdk.key-sha512.GbB 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:35:31.668 05:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:33.039 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:33.039 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:33.039 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:33.039 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:33.039 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:33.039 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:33.039 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:33.040 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:33.040 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:33.040 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:33.040 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:33.040 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:33.040 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:33.040 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:33.040 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:33.040 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:33.040 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:33.040 00:35:33.040 real 0m51.934s 00:35:33.040 user 0m49.589s 00:35:33.040 sys 0m6.040s 00:35:33.040 05:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:33.040 05:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.040 ************************************ 00:35:33.040 END TEST nvmf_auth_host 00:35:33.040 ************************************ 00:35:33.040 05:24:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:35:33.040 05:24:39 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:35:33.040 05:24:39 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:33.040 05:24:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:33.040 05:24:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:33.040 05:24:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:33.040 ************************************ 00:35:33.040 START TEST nvmf_digest 00:35:33.040 ************************************ 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:33.040 * Looking for test storage... 
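[Editor's note] The banner pairs and the real/user/sys block above come from the run_test wrapper in autotest_common.sh. A minimal sketch of that pattern, with the argument counting ('[' 3 -le 1 ']') and the xtrace bookkeeping stripped; the storage probe for digest.sh continues just below:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                                  # produces the real/user/sys lines above
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp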
00:35:33.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:33.040 05:24:39 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:35:33.040 05:24:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:34.937 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:34.937 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:35:34.937 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:34.937 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:34.937 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:34.937 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:34.937 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:34.937 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:35:34.937 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:34.937 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:35:34.937 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:35:34.937 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:35:34.937 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:35:34.937 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:35:34.937 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:35:34.937 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:34.937 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:34.937 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:34.937 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:34.937 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:34.937 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:34.937 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:34.937 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:34.937 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:34.937 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:34.937 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:34.937 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:34.938 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:34.938 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:34.938 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:34.938 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:34.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:34.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:35:34.938 00:35:34.938 --- 10.0.0.2 ping statistics --- 00:35:34.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.938 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:34.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:34.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:35:34.938 00:35:34.938 --- 10.0.0.1 ping statistics --- 00:35:34.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.938 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:34.938 05:24:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:35.196 05:24:41 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:35.196 05:24:41 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:35:35.196 05:24:41 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:35:35.196 05:24:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:35.196 05:24:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:35.196 05:24:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:35.196 ************************************ 00:35:35.196 START TEST nvmf_digest_clean 00:35:35.196 ************************************ 00:35:35.196 05:24:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:35:35.196 05:24:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:35:35.196 05:24:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:35:35.196 05:24:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:35:35.196 05:24:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:35:35.196 05:24:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:35:35.196 05:24:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:35.196 05:24:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:35.196 05:24:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:35.196 05:24:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=856857 00:35:35.196 05:24:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:35.196 05:24:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 856857 00:35:35.196 05:24:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 856857 ']' 00:35:35.196 05:24:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:35.196 
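[Editor's note] nvmf_tcp_init, traced above, is what makes this a "phy" run: the two ports of one physical E810 NIC (0000:0a:00.0 / 0000:0a:00.1) are split across network namespaces so initiator and target traffic crosses a real link instead of loopback. The same plumbing consolidated, with interface names and addresses exactly as in this log:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target, verified above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator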
05:24:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:35.196 05:24:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:35.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:35.196 05:24:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:35.196 05:24:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:35.196 [2024-07-13 05:24:41.550672] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:35:35.196 [2024-07-13 05:24:41.550814] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:35.196 EAL: No free 2048 kB hugepages reported on node 1 00:35:35.196 [2024-07-13 05:24:41.681763] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:35.455 [2024-07-13 05:24:41.931694] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:35.455 [2024-07-13 05:24:41.931770] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:35.455 [2024-07-13 05:24:41.931799] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:35.455 [2024-07-13 05:24:41.931825] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:35.455 [2024-07-13 05:24:41.931855] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
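[Editor's note] nvmfappstart then launches the target inside that namespace, with all tracepoint groups enabled and initialization paused until RPC time, and waitforlisten blocks until the RPC socket answers. A sketch; the polling loop body is an assumption (waitforlisten's internals are only partially traced here), though rpc_get_methods is the conventional liveness probe:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do   # max_retries=100, as in the trace above
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done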
00:35:35.455 [2024-07-13 05:24:41.931912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:36.020 05:24:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:36.020 05:24:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:35:36.020 05:24:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:36.020 05:24:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:36.020 05:24:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:36.285 05:24:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:36.285 05:24:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:35:36.285 05:24:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:35:36.285 05:24:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:35:36.285 05:24:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.285 05:24:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:36.552 null0 00:35:36.552 [2024-07-13 05:24:42.914494] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:36.552 [2024-07-13 05:24:42.938743] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:36.552 05:24:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.552 05:24:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:35:36.552 05:24:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:36.552 05:24:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:36.552 05:24:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:36.552 05:24:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:36.552 05:24:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:36.552 05:24:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:36.552 05:24:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=857018 00:35:36.552 05:24:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:36.552 05:24:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 857018 /var/tmp/bperf.sock 00:35:36.552 05:24:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 857018 ']' 00:35:36.552 05:24:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:36.552 05:24:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:36.552 05:24:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
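[Editor's note] common_target_config's effects are all visible above: a null0 bdev, the '*** TCP Transport Init ***' notice, and a listener for nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420. A sketch of the standard RPC sequence behind those notices; the null-bdev geometry is assumed, while '-t tcp -o' and the SPDKISFASTANDAWESOME serial come from the NVMF_TRANSPORT_OPTS and NVMF_SERIAL values printed earlier in this log:

    scripts/rpc.py framework_start_init                    # target was started with --wait-for-rpc
    scripts/rpc.py bdev_null_create null0 100 4096         # 100 MiB / 4 KiB blocks (assumed sizes)
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The unix RPC socket is reachable from the root namespace even though nvmf_tgt runs inside cvl_0_0_ns_spdk, which is why rpc_cmd needs no netns exec here.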
00:35:36.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:36.552 05:24:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:36.552 05:24:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:36.552 [2024-07-13 05:24:43.020953] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:35:36.552 [2024-07-13 05:24:43.021103] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid857018 ] 00:35:36.809 EAL: No free 2048 kB hugepages reported on node 1 00:35:36.809 [2024-07-13 05:24:43.150297] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:37.067 [2024-07-13 05:24:43.405536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:37.632 05:24:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:37.632 05:24:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:35:37.632 05:24:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:37.632 05:24:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:37.632 05:24:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:38.199 05:24:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:38.199 05:24:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:38.764 nvme0n1 00:35:38.764 05:24:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:38.764 05:24:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:38.764 Running I/O for 2 seconds... 
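[Editor's note] Each run_bperf above repeats the same steps, all of them in the trace: start bdevperf paused on its own RPC socket, finish framework init, attach the namespaced listener with data digest enabled, then drive the workload from bdevperf.py. Consolidated for this first run (randread, 4 KiB, QD 128; --ddgst is what turns on the crc32c data digest being measured):

    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests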
00:35:40.672
00:35:40.672 Latency(us)
00:35:40.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:40.672 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:35:40.672 nvme0n1 : 2.01 13806.69 53.93 0.00 0.00 9258.83 4854.52 24272.59
00:35:40.672 ===================================================================================================================
00:35:40.672 Total : 13806.69 53.93 0.00 0.00 9258.83 4854.52 24272.59
00:35:40.672 0
00:35:40.672 05:24:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
05:24:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
05:24:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
05:24:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:35:40.672 | select(.opcode=="crc32c")
00:35:40.672 | "\(.module_name) \(.executed)"'
00:35:40.673 05:24:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:35:40.931 05:24:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:35:40.931 05:24:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:35:40.931 05:24:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:35:40.931 05:24:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:35:40.931 05:24:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 857018
00:35:40.931 05:24:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 857018 ']'
00:35:40.931 05:24:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 857018
00:35:40.931 05:24:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:35:40.931 05:24:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:35:40.931 05:24:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 857018
00:35:40.931 05:24:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:35:40.931 05:24:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:35:40.931 05:24:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 857018'
killing process with pid 857018
05:24:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 857018
Received shutdown signal, test time was about 2.000000 seconds
00:35:40.931
00:35:40.931 Latency(us)
00:35:40.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:40.931 ===================================================================================================================
00:35:40.931 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:40.931 05:24:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 857018
00:35:42.306 05:24:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:35:42.306 05:24:48
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:42.306 05:24:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:42.306 05:24:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:42.306 05:24:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:42.306 05:24:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:42.306 05:24:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:42.306 05:24:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=857679 00:35:42.306 05:24:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:42.306 05:24:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 857679 /var/tmp/bperf.sock 00:35:42.306 05:24:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 857679 ']' 00:35:42.306 05:24:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:42.306 05:24:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:42.306 05:24:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:42.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:42.306 05:24:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:42.306 05:24:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:42.306 [2024-07-13 05:24:48.564607] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:35:42.306 [2024-07-13 05:24:48.564748] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid857679 ] 00:35:42.306 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:42.306 Zero copy mechanism will not be used. 
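[Editor's note] After each run the harness decides pass/fail from the accel framework's counters rather than from the IOPS table: with scan_dsa=false the digests must have been computed by the 'software' accel module, and its crc32c executed count must be non-zero. The check as issued at host/digest.sh@36-37 and @93-96 above:

    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # -> e.g. "software 13807" (count illustrative); the test then asserts
    #    (( acc_executed > 0 )) and that module_name matches exp_module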
00:35:42.306 EAL: No free 2048 kB hugepages reported on node 1 00:35:42.306 [2024-07-13 05:24:48.722840] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:42.564 [2024-07-13 05:24:48.976435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:43.130 05:24:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:43.130 05:24:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:35:43.130 05:24:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:43.130 05:24:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:43.130 05:24:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:43.694 05:24:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:43.694 05:24:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:44.261 nvme0n1 00:35:44.261 05:24:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:44.261 05:24:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:44.261 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:44.261 Zero copy mechanism will not be used. 00:35:44.261 Running I/O for 2 seconds... 
00:35:46.789 
00:35:46.789                                                                                                 Latency(us)
00:35:46.789 Device Information                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:35:46.789 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:35:46.790 	 nvme0n1             :       2.00    3146.04     393.26       0.00       0.00    5078.80    1292.52   10728.49
00:35:46.790 ===================================================================================================================
00:35:46.790 	 Total               :               3146.04     393.26       0.00       0.00    5078.80    1292.52   10728.49
00:35:46.790 0
00:35:46.790 05:24:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:35:46.790 05:24:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:35:46.790 05:24:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:35:46.790 05:24:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:35:46.790 | select(.opcode=="crc32c")
00:35:46.790 | "\(.module_name) \(.executed)"'
00:35:46.790 05:24:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:35:46.790 05:24:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:35:46.790 05:24:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:35:46.790 05:24:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:35:46.790 05:24:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:35:46.790 05:24:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 857679
00:35:46.790 05:24:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 857679 ']'
00:35:46.790 05:24:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 857679
00:35:46.790 05:24:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:35:46.790 05:24:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:35:46.790 05:24:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 857679
00:35:46.790 05:24:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:35:46.790 05:24:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:35:46.790 05:24:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 857679'
killing process with pid 857679
05:24:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 857679
Received shutdown signal, test time was about 2.000000 seconds
00:35:46.790 
00:35:46.790                                                                                                 Latency(us)
00:35:46.790 Device Information                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:35:46.790 ===================================================================================================================
00:35:46.790 	 Total               :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:35:46.790 05:24:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 857679
00:35:47.726 05:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:35:47.726 05:24:54
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:47.726 05:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:47.726 05:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:47.726 05:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:47.726 05:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:47.726 05:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:47.726 05:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=858347 00:35:47.726 05:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 858347 /var/tmp/bperf.sock 00:35:47.726 05:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:47.726 05:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 858347 ']' 00:35:47.726 05:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:47.727 05:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:47.727 05:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:47.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:47.727 05:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:47.727 05:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:47.727 [2024-07-13 05:24:54.174361] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
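waitforlisten simply polls the new app's RPC socket until it responds; a minimal sketch of what the helper amounts to (the real implementation lives in common/autotest_common.sh and differs in detail, so treat this as an illustration):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # hypothetical reimplementation: retry rpc_get_methods up to max_retries times
    for ((i = 0; i < 100; i++)); do
        "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock -t 1 rpc_get_methods \
            >/dev/null 2>&1 && break
        sleep 0.1
    done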
00:35:47.727 [2024-07-13 05:24:54.174503] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid858347 ] 00:35:47.984 EAL: No free 2048 kB hugepages reported on node 1 00:35:47.984 [2024-07-13 05:24:54.302337] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:48.242 [2024-07-13 05:24:54.555189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:48.807 05:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:48.807 05:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:35:48.807 05:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:48.807 05:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:48.807 05:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:49.371 05:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:49.371 05:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:49.627 nvme0n1 00:35:49.885 05:24:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:49.885 05:24:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:49.885 Running I/O for 2 seconds... 
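In the results table that follows, the MiB/s column is just IOPS times the 4096-byte I/O size; checking the numbers by hand:

    awk 'BEGIN { printf "%.2f MiB/s\n", 17214.97 * 4096 / 1048576 }'   # -> 67.25 MiB/s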
00:35:51.839 
00:35:51.839                                                                                                 Latency(us)
00:35:51.839 Device Information                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:35:51.839 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:35:51.839 	 nvme0n1             :       2.00   17214.97      67.25       0.00       0.00    7421.36    3325.35   14660.65
00:35:51.839 ===================================================================================================================
00:35:51.839 	 Total               :              17214.97      67.25       0.00       0.00    7421.36    3325.35   14660.65
00:35:51.839 0
00:35:51.839 05:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:35:51.839 05:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:35:51.839 05:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:35:51.839 05:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:35:51.839 | select(.opcode=="crc32c")
00:35:51.839 | "\(.module_name) \(.executed)"'
00:35:51.839 05:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:35:52.098 05:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:35:52.098 05:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:35:52.098 05:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:35:52.098 05:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:35:52.098 05:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 858347
00:35:52.098 05:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 858347 ']'
00:35:52.098 05:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 858347
00:35:52.098 05:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:35:52.098 05:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:35:52.098 05:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 858347
00:35:52.098 05:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:35:52.098 05:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:35:52.098 05:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 858347'
killing process with pid 858347
05:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 858347
Received shutdown signal, test time was about 2.000000 seconds
00:35:52.098 
00:35:52.098                                                                                                 Latency(us)
00:35:52.098 Device Information                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:35:52.098 ===================================================================================================================
00:35:52.098 	 Total               :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:35:52.098 05:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 858347
00:35:53.473 05:24:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:35:53.473 05:24:59
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:53.473 05:24:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:53.473 05:24:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:53.473 05:24:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:53.473 05:24:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:53.473 05:24:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:53.473 05:24:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=858985 00:35:53.473 05:24:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:53.473 05:24:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 858985 /var/tmp/bperf.sock 00:35:53.473 05:24:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 858985 ']' 00:35:53.473 05:24:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:53.473 05:24:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:53.473 05:24:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:53.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:53.473 05:24:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:53.473 05:24:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:53.473 [2024-07-13 05:24:59.704437] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:35:53.473 [2024-07-13 05:24:59.704614] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid858985 ] 00:35:53.473 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:53.473 Zero copy mechanism will not be used. 
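The zero-copy notice above shows up only for the 131072-byte runs: per the message, the socket layer skips its zero-copy send path once the configured I/O size exceeds the 65536-byte threshold, so the 4096-byte pass earlier printed nothing. Schematically, the check amounts to:

    # illustrative only; the real check is inside SPDK's sock layer
    io_size=131072 zcopy_threshold=65536
    if (( io_size > zcopy_threshold )); then
        echo "I/O size of $io_size is greater than zero copy threshold ($zcopy_threshold)."
    fi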
00:35:53.473 EAL: No free 2048 kB hugepages reported on node 1 00:35:53.473 [2024-07-13 05:24:59.845921] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:53.732 [2024-07-13 05:25:00.101609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:54.299 05:25:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:54.299 05:25:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:35:54.299 05:25:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:54.299 05:25:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:54.299 05:25:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:54.866 05:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:54.866 05:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:55.124 nvme0n1 00:35:55.124 05:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:55.124 05:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:55.381 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:55.381 Zero copy mechanism will not be used. 00:35:55.381 Running I/O for 2 seconds... 
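After each run the harness pulls accel framework statistics over the same socket and asserts that the crc32c digests were actually executed, and by the expected module; since DSA scanning is off here, that module is "software". Equivalent to:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    read -r acc_module acc_executed < <(
        "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats \
          | jq -rc '.operations[] | select(.opcode=="crc32c")
                    | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 )) && [[ $acc_module == software ]]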
00:35:57.278 
00:35:57.278                                                                                                 Latency(us)
00:35:57.278 Device Information                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:35:57.278 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:35:57.278 	 nvme0n1             :       2.00    3611.19     451.40       0.00       0.00    4418.18    3203.98    9563.40
00:35:57.278 ===================================================================================================================
00:35:57.278 	 Total               :               3611.19     451.40       0.00       0.00    4418.18    3203.98    9563.40
00:35:57.278 0
00:35:57.278 05:25:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:35:57.279 05:25:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:35:57.279 05:25:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:35:57.279 05:25:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:35:57.279 05:25:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:35:57.279 | select(.opcode=="crc32c")
00:35:57.279 | "\(.module_name) \(.executed)"'
00:35:57.842 05:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:35:57.842 05:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:35:57.842 05:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:35:57.842 05:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:35:57.842 05:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 858985
00:35:57.842 05:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 858985 ']'
00:35:57.842 05:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 858985
00:35:57.842 05:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:35:57.842 05:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:35:57.842 05:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 858985
00:35:57.842 05:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:35:57.842 05:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:35:57.842 05:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 858985'
killing process with pid 858985
05:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 858985
Received shutdown signal, test time was about 2.000000 seconds
00:35:57.842 
00:35:57.842                                                                                                 Latency(us)
00:35:57.842 Device Information                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:35:57.842 ===================================================================================================================
00:35:57.842 	 Total               :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:35:57.842 05:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 858985
00:35:58.775 05:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 856857
00:35:58.775 05:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean
-- common/autotest_common.sh@948 -- # '[' -z 856857 ']' 00:35:58.775 05:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 856857 00:35:58.775 05:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:35:58.775 05:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:58.775 05:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 856857 00:35:58.775 05:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:58.775 05:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:58.775 05:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 856857' 00:35:58.775 killing process with pid 856857 00:35:58.775 05:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 856857 00:35:58.775 05:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 856857 00:36:00.146 00:36:00.146 real 0m24.986s 00:36:00.146 user 0m48.754s 00:36:00.146 sys 0m4.502s 00:36:00.146 05:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:00.146 05:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:00.146 ************************************ 00:36:00.146 END TEST nvmf_digest_clean 00:36:00.146 ************************************ 00:36:00.146 05:25:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:36:00.146 05:25:06 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:36:00.146 05:25:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:00.146 05:25:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:00.146 05:25:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:00.146 ************************************ 00:36:00.146 START TEST nvmf_digest_error 00:36:00.146 ************************************ 00:36:00.146 05:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:36:00.146 05:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:36:00.146 05:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:00.146 05:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:00.146 05:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:00.146 05:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=859837 00:36:00.146 05:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:00.146 05:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 859837 00:36:00.146 05:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 859837 ']' 00:36:00.146 05:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:00.146 05:25:06 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:00.146 05:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:00.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:00.146 05:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:00.146 05:25:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:00.146 [2024-07-13 05:25:06.581042] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:36:00.146 [2024-07-13 05:25:06.581168] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:00.403 EAL: No free 2048 kB hugepages reported on node 1 00:36:00.403 [2024-07-13 05:25:06.713742] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:00.661 [2024-07-13 05:25:06.957691] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:00.661 [2024-07-13 05:25:06.957764] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:00.661 [2024-07-13 05:25:06.957793] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:00.661 [2024-07-13 05:25:06.957819] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:00.661 [2024-07-13 05:25:06.957841] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
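For the error-path test the target is brought up with crc32c routed to the error-injection accel module, then a null bdev is exported over TCP; a sketch reconstructed from the RPCs visible in this log (the subsystem NQN comes from the attach calls, while the null bdev size and block size are illustrative guesses, and framework_start_init is implied by --wait-for-rpc):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    tgt_rpc() { "$SPDK/scripts/rpc.py" "$@"; }           # default /var/tmp/spdk.sock
    tgt_rpc accel_assign_opc -o crc32c -m error           # digests go through the 'error' module
    tgt_rpc framework_start_init
    tgt_rpc bdev_null_create null0 100 4096               # hypothetical size/block args
    tgt_rpc nvmf_create_transport -t tcp
    tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp \
        -a 10.0.0.2 -s 4420
    # later, once bdevperf attaches: corrupt the next 256 crc32c results so the
    # host sees data digest errors and COMMAND TRANSIENT TRANSPORT ERROR, as below
    tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 256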
00:36:00.661 [2024-07-13 05:25:06.957898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:01.226 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:01.226 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:36:01.226 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:01.226 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:01.226 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:01.226 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:01.226 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:36:01.226 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.226 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:01.226 [2024-07-13 05:25:07.544224] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:36:01.226 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.226 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:36:01.226 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:36:01.226 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.226 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:01.484 null0 00:36:01.484 [2024-07-13 05:25:07.928898] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:01.484 [2024-07-13 05:25:07.953154] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:01.484 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.484 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:36:01.484 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:01.484 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:01.484 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:01.484 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:01.484 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=859992 00:36:01.484 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:36:01.484 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 859992 /var/tmp/bperf.sock 00:36:01.484 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 859992 ']' 00:36:01.484 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:01.484 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:36:01.484 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:01.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:01.484 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:01.484 05:25:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:01.741 [2024-07-13 05:25:08.046403] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:36:01.741 [2024-07-13 05:25:08.046568] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid859992 ] 00:36:01.741 EAL: No free 2048 kB hugepages reported on node 1 00:36:01.741 [2024-07-13 05:25:08.186221] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:01.999 [2024-07-13 05:25:08.437443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:02.567 05:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:02.567 05:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:36:02.567 05:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:02.567 05:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:03.133 05:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:03.133 05:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.133 05:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:03.133 05:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.133 05:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:03.133 05:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:03.392 nvme0n1 00:36:03.392 05:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:03.392 05:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.392 05:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:03.392 05:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.392 05:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:03.392 05:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:03.392 Running I/O for 2 seconds... 00:36:03.392 [2024-07-13 05:25:09.848315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.392 [2024-07-13 05:25:09.848417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.392 [2024-07-13 05:25:09.848451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.392 [2024-07-13 05:25:09.864159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.392 [2024-07-13 05:25:09.864224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.392 [2024-07-13 05:25:09.864253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.392 [2024-07-13 05:25:09.880796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.392 [2024-07-13 05:25:09.880846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.392 [2024-07-13 05:25:09.880884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.650 [2024-07-13 05:25:09.900301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.650 [2024-07-13 05:25:09.900359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.650 [2024-07-13 05:25:09.900387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.650 [2024-07-13 05:25:09.922024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.650 [2024-07-13 05:25:09.922078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.650 [2024-07-13 05:25:09.922105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.650 [2024-07-13 05:25:09.939059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.650 [2024-07-13 05:25:09.939103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.650 [2024-07-13 05:25:09.939130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.650 [2024-07-13 05:25:09.956207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.650 [2024-07-13 05:25:09.956254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:120 nsid:1 lba:7096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.650 [2024-07-13 05:25:09.956282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.650 [2024-07-13 05:25:09.976422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.650 [2024-07-13 05:25:09.976470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.650 [2024-07-13 05:25:09.976499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.650 [2024-07-13 05:25:09.992254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.650 [2024-07-13 05:25:09.992302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.650 [2024-07-13 05:25:09.992331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.650 [2024-07-13 05:25:10.011795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.650 [2024-07-13 05:25:10.011892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.650 [2024-07-13 05:25:10.011942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.650 [2024-07-13 05:25:10.030552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.650 [2024-07-13 05:25:10.030611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.650 [2024-07-13 05:25:10.030642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.650 [2024-07-13 05:25:10.047114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.650 [2024-07-13 05:25:10.047190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.650 [2024-07-13 05:25:10.047236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.650 [2024-07-13 05:25:10.067636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.651 [2024-07-13 05:25:10.067695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.651 [2024-07-13 05:25:10.067724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.651 [2024-07-13 05:25:10.091248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.651 
[2024-07-13 05:25:10.091298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.651 [2024-07-13 05:25:10.091328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.651 [2024-07-13 05:25:10.113448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.651 [2024-07-13 05:25:10.113497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.651 [2024-07-13 05:25:10.113537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.651 [2024-07-13 05:25:10.133589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.651 [2024-07-13 05:25:10.133638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.651 [2024-07-13 05:25:10.133667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.909 [2024-07-13 05:25:10.150210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.909 [2024-07-13 05:25:10.150264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.909 [2024-07-13 05:25:10.150294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.909 [2024-07-13 05:25:10.170100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.909 [2024-07-13 05:25:10.170141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.909 [2024-07-13 05:25:10.170165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.909 [2024-07-13 05:25:10.189705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.909 [2024-07-13 05:25:10.189753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.909 [2024-07-13 05:25:10.189782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.909 [2024-07-13 05:25:10.210277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.909 [2024-07-13 05:25:10.210325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.909 [2024-07-13 05:25:10.210354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.909 [2024-07-13 05:25:10.226282] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.909 [2024-07-13 05:25:10.226329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.909 [2024-07-13 05:25:10.226358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.909 [2024-07-13 05:25:10.247010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.909 [2024-07-13 05:25:10.247064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.909 [2024-07-13 05:25:10.247089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.909 [2024-07-13 05:25:10.265755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.909 [2024-07-13 05:25:10.265802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.909 [2024-07-13 05:25:10.265831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.909 [2024-07-13 05:25:10.282067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.909 [2024-07-13 05:25:10.282122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.909 [2024-07-13 05:25:10.282147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.909 [2024-07-13 05:25:10.302686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.909 [2024-07-13 05:25:10.302734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.910 [2024-07-13 05:25:10.302762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.910 [2024-07-13 05:25:10.323228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.910 [2024-07-13 05:25:10.323275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.910 [2024-07-13 05:25:10.323305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.910 [2024-07-13 05:25:10.339394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.910 [2024-07-13 05:25:10.339441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.910 [2024-07-13 05:25:10.339472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.910 [2024-07-13 05:25:10.355063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.910 [2024-07-13 05:25:10.355116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.910 [2024-07-13 05:25:10.355141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.910 [2024-07-13 05:25:10.373330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.910 [2024-07-13 05:25:10.373378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.910 [2024-07-13 05:25:10.373407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.910 [2024-07-13 05:25:10.394018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:03.910 [2024-07-13 05:25:10.394060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.910 [2024-07-13 05:25:10.394086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:04.167 [2024-07-13 05:25:10.409327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:04.167 [2024-07-13 05:25:10.409373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.167 [2024-07-13 05:25:10.409402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:04.167 [2024-07-13 05:25:10.429813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:04.167 [2024-07-13 05:25:10.429860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.167 [2024-07-13 05:25:10.429920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:04.167 [2024-07-13 05:25:10.446539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:04.167 [2024-07-13 05:25:10.446587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.167 [2024-07-13 05:25:10.446617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:04.167 [2024-07-13 05:25:10.462108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:04.167 [2024-07-13 05:25:10.462148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.167 [2024-07-13 05:25:10.462188] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:04.167 [2024-07-13 05:25:10.481885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:04.167 [2024-07-13 05:25:10.481949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.167 [2024-07-13 05:25:10.481991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:04.167 [2024-07-13 05:25:10.502912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:04.167 [2024-07-13 05:25:10.502970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.167 [2024-07-13 05:25:10.503013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:04.167 [2024-07-13 05:25:10.519376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:04.167 [2024-07-13 05:25:10.519424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.167 [2024-07-13 05:25:10.519453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:04.167 [2024-07-13 05:25:10.541081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:04.167 [2024-07-13 05:25:10.541137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.167 [2024-07-13 05:25:10.541163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:04.167 [2024-07-13 05:25:10.555588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:04.167 [2024-07-13 05:25:10.555635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.167 [2024-07-13 05:25:10.555664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:04.167 [2024-07-13 05:25:10.575072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:04.167 [2024-07-13 05:25:10.575127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.167 [2024-07-13 05:25:10.575153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:04.167 [2024-07-13 05:25:10.593764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:04.167 [2024-07-13 05:25:10.593811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9043 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.167 [2024-07-13 05:25:10.593840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:04.167 [2024-07-13 05:25:10.612138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:04.167 [2024-07-13 05:25:10.612181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.167 [2024-07-13 05:25:10.612225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:04.167 [2024-07-13 05:25:10.630040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:04.167 [2024-07-13 05:25:10.630082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.167 [2024-07-13 05:25:10.630109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:04.167 [2024-07-13 05:25:10.645824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:04.167 [2024-07-13 05:25:10.645879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.167 [2024-07-13 05:25:10.645924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:04.167 [2024-07-13 05:25:10.665726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:04.167 [2024-07-13 05:25:10.665773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.167 [2024-07-13 05:25:10.665802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:04.428 [2024-07-13 05:25:10.689344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:04.428 [2024-07-13 05:25:10.689396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.428 [2024-07-13 05:25:10.689426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:04.428 [2024-07-13 05:25:10.706260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:04.428 [2024-07-13 05:25:10.706305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.428 [2024-07-13 05:25:10.706331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:04.428 [2024-07-13 05:25:10.728087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:04.428 [2024-07-13 05:25:10.728133] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:04.428 [2024-07-13 05:25:10.728175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:04.428 [2024-07-13 05:25:10.744917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:04.428 [2024-07-13 05:25:10.744962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:04.428 [2024-07-13 05:25:10.745002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:04.428 [... roughly 57 further record groups of the same shape (data digest error on tqpair=(0x6150001f2a00), then READ sqid:1 len:1 with varying cid/lba, then COMMAND TRANSIENT TRANSPORT ERROR (00/22) sqhd:0001) from 05:25:10.759697 through 05:25:11.778965, elided here as repetitive ...]
00:36:05.460 [2024-07-13 05:25:11.795312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:05.460 [2024-07-13 05:25:11.795370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:05.460 [2024-07-13 05:25:11.795400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:05.460 [2024-07-13 05:25:11.815738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:05.460 [2024-07-13 05:25:11.815785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:05.460 [2024-07-13 05:25:11.815814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:05.460
00:36:05.460 Latency(us)
00:36:05.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:05.460 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:36:05.460 nvme0n1 : 2.01 14060.15 54.92 0.00 0.00 9088.58 4563.25 29127.11
00:36:05.460 ===================================================================================================================
00:36:05.460 Total : 14060.15 54.92 0.00 0.00 9088.58 4563.25 29127.11
00:36:05.460 0
00:36:05.460 05:25:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:05.460 05:25:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:05.460 05:25:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:05.460 05:25:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:05.460 | .driver_specific
00:36:05.460 | .nvme_error
00:36:05.460 | .status_code
00:36:05.460 | .command_transient_transport_error'
00:36:05.718 05:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 110 > 0 ))
00:36:05.718 05:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 859992
00:36:05.718 05:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 859992 ']'
00:36:05.718 05:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 859992
00:36:05.718 05:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:36:05.718 05:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:05.718 05:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 859992
00:36:05.718 05:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:36:05.718 05:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:36:05.718 05:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 859992'
00:36:05.718 killing process with pid 859992
00:36:05.718 05:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 859992
00:36:05.718 Received shutdown signal, test time was about 2.000000 seconds
00:36:05.718
00:36:05.718 Latency(us)
00:36:05.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:05.718 ===================================================================================================================
00:36:05.718 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
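For reference, the transient-error check traced above (host/digest.sh@71 together with @27-@28) reads the per-status-code NVMe error counters that --nvme-error-stat enables and asserts the count is non-zero. A minimal standalone sketch in shell, assuming the same bperf RPC socket as this run (rpc.py path abbreviated):

    # Hypothetical standalone form of get_transient_errcount from digest.sh;
    # socket path, bdev name, and jq filter are taken from the trace above.
    get_transient_errcount() {
        local bdev=$1
        scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }
    # In this run the counter came back as 110, so the assertion held:
    (( $(get_transient_errcount nvme0n1) > 0 ))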
00:36:05.718 05:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 859992
00:36:07.090 05:25:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:36:07.090 05:25:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:36:07.090 05:25:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:36:07.090 05:25:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:36:07.090 05:25:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:36:07.090 05:25:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=860544
00:36:07.090 05:25:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 860544 /var/tmp/bperf.sock
00:36:07.090 05:25:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 860544 ']'
00:36:07.090 05:25:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:07.090 05:25:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:36:07.090 05:25:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:36:07.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:36:07.090 05:25:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:36:07.090 05:25:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:07.090 05:25:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:36:07.090 [2024-07-13 05:25:13.276770] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:36:07.090 [2024-07-13 05:25:13.276926] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid860544 ]
00:36:07.090 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:07.090 Zero copy mechanism will not be used.
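The relaunch traced above backgrounds bdevperf with -z, so the app sits idle on its RPC socket until perform_tests arrives; -o 131072 is above the 65536-byte zero-copy threshold, which is why the "Zero copy mechanism will not be used" notices appear. A rough sketch of that launch step, with paths shortened and PID handling simplified relative to digest.sh:

    # Assumed simplification of run_bperf_err's launch step (digest.sh@57-@60).
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &      # -z: wait for an RPC before running I/O
    bperfpid=$!                                    # 860544 in this run
    waitforlisten "$bperfpid" /var/tmp/bperf.sock  # autotest_common.sh helper polls the socket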
00:36:07.090 EAL: No free 2048 kB hugepages reported on node 1
00:36:07.090 [2024-07-13 05:25:13.408660] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:07.348 [2024-07-13 05:25:13.659178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:36:07.911 05:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:07.911 05:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:36:07.911 05:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:07.911 05:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:08.168 05:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:08.168 05:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:08.168 05:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:08.168 05:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:08.168 05:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:08.168 05:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:08.425 nvme0n1
00:36:08.425 05:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:36:08.425 05:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:08.425 05:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:08.683 05:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:08.683 05:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:36:08.683 05:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:36:08.683 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:08.683 Zero copy mechanism will not be used.
00:36:08.683 Running I/O for 2 seconds...
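Before I/O starts, the trace above wires up the error path: NVMe error counting with unlimited retries on the bperf side, an attach with data digest enabled, and crc32c corruption injected on every 32nd accel operation so digest verification fails and surfaces as the transient transport errors logged below. A condensed sketch of that sequence; rpc_tgt stands in for the harness's rpc_cmd helper, and its socket path is an assumption:

    rpc_bperf() { scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }  # bperf_rpc in the trace
    rpc_tgt()   { scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }   # rpc_cmd; default socket assumed

    rpc_bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_tgt accel_error_inject_error -o crc32c -t disable        # start from a clean state
    rpc_bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0           # --ddgst enables data digest
    rpc_tgt accel_error_inject_error -o crc32c -t corrupt -i 32  # corrupt every 32nd crc32c op
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests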
00:36:08.683 [2024-07-13 05:25:15.035898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:08.683 [2024-07-13 05:25:15.036029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:08.683 [2024-07-13 05:25:15.036062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:08.683 [2024-07-13 05:25:15.047593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:08.683 [2024-07-13 05:25:15.047646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:08.683 [2024-07-13 05:25:15.047676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:08.683 [... roughly 50 further record groups of the same shape (data digest error on tqpair=(0x6150001f2a00), then READ sqid:1 cid:15 len:32 with varying lba, then COMMAND TRANSIENT TRANSPORT ERROR (00/22) with sqhd cycling 0001/0021/0041/0061) from 05:25:15.058796 through 05:25:15.605407, elided here as repetitive ...]
00:36:09.201 [2024-07-13 05:25:15.616345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:09.201 [2024-07-13 05:25:15.616394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:09.201 [2024-07-13 05:25:15.616422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:09.201 [2024-07-13 05:25:15.627180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:09.201 [2024-07-13 05:25:15.627236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1
cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.201 [2024-07-13 05:25:15.627266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:09.201 [2024-07-13 05:25:15.637956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.201 [2024-07-13 05:25:15.638012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.201 [2024-07-13 05:25:15.638038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:09.201 [2024-07-13 05:25:15.648657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.201 [2024-07-13 05:25:15.648705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.201 [2024-07-13 05:25:15.648733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:09.201 [2024-07-13 05:25:15.659416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.201 [2024-07-13 05:25:15.659463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.201 [2024-07-13 05:25:15.659491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:09.201 [2024-07-13 05:25:15.670250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.201 [2024-07-13 05:25:15.670299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.201 [2024-07-13 05:25:15.670338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:09.201 [2024-07-13 05:25:15.681076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.201 [2024-07-13 05:25:15.681119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.201 [2024-07-13 05:25:15.681145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:09.201 [2024-07-13 05:25:15.691787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.201 [2024-07-13 05:25:15.691836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.201 [2024-07-13 05:25:15.691873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:09.460 [2024-07-13 05:25:15.702727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.460 
[2024-07-13 05:25:15.702775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.460 [2024-07-13 05:25:15.702804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:09.460 [2024-07-13 05:25:15.713618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.460 [2024-07-13 05:25:15.713665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.460 [2024-07-13 05:25:15.713693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:09.460 [2024-07-13 05:25:15.724522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.460 [2024-07-13 05:25:15.724570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.460 [2024-07-13 05:25:15.724599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:09.460 [2024-07-13 05:25:15.735300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.460 [2024-07-13 05:25:15.735347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.460 [2024-07-13 05:25:15.735376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:09.460 [2024-07-13 05:25:15.746041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.460 [2024-07-13 05:25:15.746098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.460 [2024-07-13 05:25:15.746125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:09.460 [2024-07-13 05:25:15.756694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.460 [2024-07-13 05:25:15.756741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.460 [2024-07-13 05:25:15.756769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:09.460 [2024-07-13 05:25:15.767502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.460 [2024-07-13 05:25:15.767549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.460 [2024-07-13 05:25:15.767578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:09.460 [2024-07-13 05:25:15.778328] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.460 [2024-07-13 05:25:15.778375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.460 [2024-07-13 05:25:15.778404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:09.460 [2024-07-13 05:25:15.789134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.460 [2024-07-13 05:25:15.789176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.460 [2024-07-13 05:25:15.789216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:09.460 [2024-07-13 05:25:15.799955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.460 [2024-07-13 05:25:15.800012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.460 [2024-07-13 05:25:15.800039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:09.460 [2024-07-13 05:25:15.810635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.460 [2024-07-13 05:25:15.810684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.460 [2024-07-13 05:25:15.810712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:09.460 [2024-07-13 05:25:15.821373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.460 [2024-07-13 05:25:15.821423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.460 [2024-07-13 05:25:15.821453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:09.460 [2024-07-13 05:25:15.832166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.460 [2024-07-13 05:25:15.832227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.460 [2024-07-13 05:25:15.832257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:09.460 [2024-07-13 05:25:15.842995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.460 [2024-07-13 05:25:15.843050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.460 [2024-07-13 05:25:15.843076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:09.460 [2024-07-13 05:25:15.853721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.460 [2024-07-13 05:25:15.853768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.460 [2024-07-13 05:25:15.853805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:09.460 [2024-07-13 05:25:15.864423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.460 [2024-07-13 05:25:15.864469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.460 [2024-07-13 05:25:15.864497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:09.460 [2024-07-13 05:25:15.875227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.460 [2024-07-13 05:25:15.875268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.460 [2024-07-13 05:25:15.875311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:09.460 [2024-07-13 05:25:15.885877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.460 [2024-07-13 05:25:15.885936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.460 [2024-07-13 05:25:15.885975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:09.460 [2024-07-13 05:25:15.896474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.460 [2024-07-13 05:25:15.896525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.460 [2024-07-13 05:25:15.896560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:09.460 [2024-07-13 05:25:15.907353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.460 [2024-07-13 05:25:15.907402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.460 [2024-07-13 05:25:15.907431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:09.460 [2024-07-13 05:25:15.918133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.460 [2024-07-13 05:25:15.918189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.460 [2024-07-13 05:25:15.918232] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:09.461 [2024-07-13 05:25:15.929249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.461 [2024-07-13 05:25:15.929295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.461 [2024-07-13 05:25:15.929323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:09.461 [2024-07-13 05:25:15.940105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.461 [2024-07-13 05:25:15.940163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.461 [2024-07-13 05:25:15.940188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:09.461 [2024-07-13 05:25:15.950861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.461 [2024-07-13 05:25:15.950929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.461 [2024-07-13 05:25:15.950955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:09.719 [2024-07-13 05:25:15.962021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.719 [2024-07-13 05:25:15.962065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.719 [2024-07-13 05:25:15.962091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:09.719 [2024-07-13 05:25:15.972750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.719 [2024-07-13 05:25:15.972799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.719 [2024-07-13 05:25:15.972827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:09.719 [2024-07-13 05:25:15.983569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.719 [2024-07-13 05:25:15.983616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.719 [2024-07-13 05:25:15.983646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:09.719 [2024-07-13 05:25:15.994277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.719 [2024-07-13 05:25:15.994324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.719 [2024-07-13 05:25:15.994353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:09.719 [2024-07-13 05:25:16.005037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.719 [2024-07-13 05:25:16.005094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.719 [2024-07-13 05:25:16.005120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:09.719 [2024-07-13 05:25:16.015839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.719 [2024-07-13 05:25:16.015900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.719 [2024-07-13 05:25:16.015946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:09.719 [2024-07-13 05:25:16.026636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.719 [2024-07-13 05:25:16.026683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.720 [2024-07-13 05:25:16.026711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:09.720 [2024-07-13 05:25:16.037450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.720 [2024-07-13 05:25:16.037497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.720 [2024-07-13 05:25:16.037535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:09.720 [2024-07-13 05:25:16.048178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.720 [2024-07-13 05:25:16.048239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.720 [2024-07-13 05:25:16.048267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:09.720 [2024-07-13 05:25:16.059067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.720 [2024-07-13 05:25:16.059123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.720 [2024-07-13 05:25:16.059149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:09.720 [2024-07-13 05:25:16.069828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.720 [2024-07-13 
05:25:16.069914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.720 [2024-07-13 05:25:16.069945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:09.720 [2024-07-13 05:25:16.080584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.720 [2024-07-13 05:25:16.080631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.720 [2024-07-13 05:25:16.080676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:09.720 [2024-07-13 05:25:16.091371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.720 [2024-07-13 05:25:16.091418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.720 [2024-07-13 05:25:16.091447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:09.720 [2024-07-13 05:25:16.102148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.720 [2024-07-13 05:25:16.102210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.720 [2024-07-13 05:25:16.102239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:09.720 [2024-07-13 05:25:16.112818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.720 [2024-07-13 05:25:16.112873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.720 [2024-07-13 05:25:16.112919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:09.720 [2024-07-13 05:25:16.123726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.720 [2024-07-13 05:25:16.123773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.720 [2024-07-13 05:25:16.123801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:09.720 [2024-07-13 05:25:16.134567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.720 [2024-07-13 05:25:16.134615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.720 [2024-07-13 05:25:16.134644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:09.720 [2024-07-13 05:25:16.145358] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.720 [2024-07-13 05:25:16.145405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.720 [2024-07-13 05:25:16.145433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:09.720 [2024-07-13 05:25:16.156073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.720 [2024-07-13 05:25:16.156128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.720 [2024-07-13 05:25:16.156154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:09.720 [2024-07-13 05:25:16.166863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.720 [2024-07-13 05:25:16.166935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.720 [2024-07-13 05:25:16.166960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:09.720 [2024-07-13 05:25:16.177556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.720 [2024-07-13 05:25:16.177604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.720 [2024-07-13 05:25:16.177633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:09.720 [2024-07-13 05:25:16.188406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.720 [2024-07-13 05:25:16.188453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.720 [2024-07-13 05:25:16.188481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:09.720 [2024-07-13 05:25:16.199099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.720 [2024-07-13 05:25:16.199157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.720 [2024-07-13 05:25:16.199183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:09.720 [2024-07-13 05:25:16.209941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.720 [2024-07-13 05:25:16.209998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.720 [2024-07-13 05:25:16.210024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:09.979 [2024-07-13 05:25:16.221240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.979 [2024-07-13 05:25:16.221290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.979 [2024-07-13 05:25:16.221328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:09.979 [2024-07-13 05:25:16.232527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.979 [2024-07-13 05:25:16.232576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.979 [2024-07-13 05:25:16.232604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:09.979 [2024-07-13 05:25:16.243750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.979 [2024-07-13 05:25:16.243799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.979 [2024-07-13 05:25:16.243828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:09.979 [2024-07-13 05:25:16.254607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.979 [2024-07-13 05:25:16.254655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.979 [2024-07-13 05:25:16.254684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:09.979 [2024-07-13 05:25:16.265640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.979 [2024-07-13 05:25:16.265686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.979 [2024-07-13 05:25:16.265723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:09.979 [2024-07-13 05:25:16.276528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.979 [2024-07-13 05:25:16.276577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.979 [2024-07-13 05:25:16.276606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:09.979 [2024-07-13 05:25:16.287373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.979 [2024-07-13 05:25:16.287421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.979 [2024-07-13 
05:25:16.287451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:09.979 [2024-07-13 05:25:16.298102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.979 [2024-07-13 05:25:16.298159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.979 [2024-07-13 05:25:16.298186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:09.979 [2024-07-13 05:25:16.308931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.979 [2024-07-13 05:25:16.308975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.979 [2024-07-13 05:25:16.309001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:09.979 [2024-07-13 05:25:16.319795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.979 [2024-07-13 05:25:16.319842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.979 [2024-07-13 05:25:16.319883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:09.979 [2024-07-13 05:25:16.330605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.979 [2024-07-13 05:25:16.330652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.979 [2024-07-13 05:25:16.330680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:09.979 [2024-07-13 05:25:16.341641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.979 [2024-07-13 05:25:16.341689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.979 [2024-07-13 05:25:16.341717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:09.979 [2024-07-13 05:25:16.352818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.979 [2024-07-13 05:25:16.352885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.979 [2024-07-13 05:25:16.352932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:09.979 [2024-07-13 05:25:16.363648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.979 [2024-07-13 05:25:16.363695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.979 [2024-07-13 05:25:16.363724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:09.979 [2024-07-13 05:25:16.374454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.979 [2024-07-13 05:25:16.374501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.979 [2024-07-13 05:25:16.374530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:09.979 [2024-07-13 05:25:16.385325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.979 [2024-07-13 05:25:16.385373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.979 [2024-07-13 05:25:16.385402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:09.979 [2024-07-13 05:25:16.396169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.979 [2024-07-13 05:25:16.396216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.979 [2024-07-13 05:25:16.396245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:09.979 [2024-07-13 05:25:16.406983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.979 [2024-07-13 05:25:16.407026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.979 [2024-07-13 05:25:16.407061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:09.979 [2024-07-13 05:25:16.417988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.979 [2024-07-13 05:25:16.418044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.979 [2024-07-13 05:25:16.418070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:09.979 [2024-07-13 05:25:16.429037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.979 [2024-07-13 05:25:16.429080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.979 [2024-07-13 05:25:16.429106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:09.979 [2024-07-13 05:25:16.440027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.979 [2024-07-13 
05:25:16.440083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.979 [2024-07-13 05:25:16.440110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:09.979 [2024-07-13 05:25:16.450926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.979 [2024-07-13 05:25:16.450984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.979 [2024-07-13 05:25:16.451011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:09.979 [2024-07-13 05:25:16.461773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.979 [2024-07-13 05:25:16.461822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.979 [2024-07-13 05:25:16.461850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:09.979 [2024-07-13 05:25:16.472474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:09.979 [2024-07-13 05:25:16.472522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.979 [2024-07-13 05:25:16.472551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:10.238 [2024-07-13 05:25:16.483485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:10.238 [2024-07-13 05:25:16.483534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.238 [2024-07-13 05:25:16.483563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:10.238 [2024-07-13 05:25:16.494208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:10.238 [2024-07-13 05:25:16.494256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.238 [2024-07-13 05:25:16.494284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:10.238 [2024-07-13 05:25:16.504990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:10.238 [2024-07-13 05:25:16.505047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.238 [2024-07-13 05:25:16.505073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:10.238 [2024-07-13 05:25:16.516095] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:10.238 [2024-07-13 05:25:16.516153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.238 [2024-07-13 05:25:16.516180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:10.238 [2024-07-13 05:25:16.526980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:10.238 [2024-07-13 05:25:16.527022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.238 [2024-07-13 05:25:16.527063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:10.238 [2024-07-13 05:25:16.537936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:10.238 [2024-07-13 05:25:16.537994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.238 [2024-07-13 05:25:16.538021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:10.238 [2024-07-13 05:25:16.548735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:10.238 [2024-07-13 05:25:16.548801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.238 [2024-07-13 05:25:16.548830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:10.238 [2024-07-13 05:25:16.559577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:10.238 [2024-07-13 05:25:16.559624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.238 [2024-07-13 05:25:16.559653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:10.238 [2024-07-13 05:25:16.570278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:10.238 [2024-07-13 05:25:16.570326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.238 [2024-07-13 05:25:16.570355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:10.238 [2024-07-13 05:25:16.580837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:10.238 [2024-07-13 05:25:16.580896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.238 [2024-07-13 05:25:16.580926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
00:36:10.238 [2024-07-13 05:25:16.591755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:10.238 [2024-07-13 05:25:16.591803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:10.238 [2024-07-13 05:25:16.591840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:10.238 [... repeated injected-error entries from 05:25:16.602 through 05:25:17.021 elided: the same three-line pattern of a data digest error on tqpair=(0x6150001f2a00), the failing READ (sqid:1 cid:15, len:32, varying lba), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:36:10.756
00:36:10.756                                                                  Latency(us)
00:36:10.756 Device Information                                           : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:36:10.756 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:36:10.756 nvme0n1                                                      :       2.00    2859.25     357.41      0.00      0.00    5588.55    4781.70   12524.66
00:36:10.756 ===================================================================================================================
00:36:10.756 Total                                                        :               2859.25     357.41      0.00      0.00    5588.55    4781.70   12524.66
00:36:10.756 0
00:36:10.756 05:25:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
05:25:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
05:25:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:10.756 | .driver_specific
00:36:10.756 | .nvme_error
00:36:10.756 | .status_code
00:36:10.756 | .command_transient_transport_error'
05:25:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:11.014 05:25:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 184 > 0 ))
00:36:11.014 05:25:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 860544
00:36:11.014 05:25:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 860544 ']'
00:36:11.014 05:25:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 860544
00:36:11.014 05:25:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:36:11.014 05:25:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:11.014 05:25:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 860544
00:36:11.014 05:25:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:36:11.014 05:25:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:36:11.014 05:25:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 860544'
00:36:11.014 killing process with pid 860544
00:36:11.014 05:25:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 860544
00:36:11.014 Received shutdown signal, test time was about 2.000000 seconds
00:36:11.014
00:36:11.014                                                                  Latency(us)
00:36:11.014 Device Information                                           : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:36:11.014 ===================================================================================================================
00:36:11.014 Total                                                        :       0.00       0.00       0.00      0.00      0.00       0.00       0.00
00:36:11.014 05:25:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 860544
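The get_transient_errcount helper traced just above (digest.sh@27/@28) is how the test turns the --nvme-error-stat counters into a pass/fail signal: bdev_get_iostat reports a per-status-code error histogram under driver_specific.nvme_error, and the jq filter pulls out command_transient_transport_error, which came back 184 here. A standalone sketch of the same query, assuming the job's workspace path and the still-listening bperf socket:

# Sketch of get_transient_errcount (digest.sh@27/@28), commands as traced above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error'
# Printed 184 in this run, so the digest.sh@71 check '(( 184 > 0 ))' passed.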
00:36:11.948 05:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
05:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
05:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
05:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
05:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
05:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=861200
05:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
05:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 861200 /var/tmp/bperf.sock
05:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 861200 ']'
05:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
05:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
05:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
05:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
05:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:12.206 [2024-07-13 05:25:18.438913] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:36:12.206 [2024-07-13 05:25:18.439062] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid861200 ]
00:36:12.206 EAL: No free 2048 kB hugepages reported on node 1
00:36:12.206 [2024-07-13 05:25:18.561627] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:12.464 [2024-07-13 05:25:18.807212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:36:13.029 05:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
05:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
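The second run drives the same bdevperf binary in write mode: 4 KiB randwrite at queue depth 128 for 2 seconds, with -z so the workload only starts once perform_tests arrives over the RPC socket. A simplified sketch of the launch-and-wait pattern traced above; the flags and socket path are the ones in this log, while the RPC poll is my stand-in for the real waitforlisten helper (which, per the trace, also uses max_retries=100):

# Launch bdevperf in RPC-driven mode and wait for its UNIX-domain socket.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!    # 861200 in this run
retries=100
until "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
    (( retries-- )) || exit 1    # give up if the socket never appears
    sleep 0.2
done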
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:13.545 nvme0n1 00:36:13.804 05:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:13.804 05:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.804 05:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:13.804 05:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.804 05:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:13.804 05:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:13.804 Running I/O for 2 seconds... 00:36:13.804 [2024-07-13 05:25:20.184646] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee5c8 00:36:13.804 [2024-07-13 05:25:20.186007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.804 [2024-07-13 05:25:20.186060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:13.804 [2024-07-13 05:25:20.201074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef6a8 00:36:13.804 [2024-07-13 05:25:20.202357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.804 [2024-07-13 05:25:20.202411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:13.804 [2024-07-13 05:25:20.217353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f81e0 00:36:13.804 [2024-07-13 05:25:20.218553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.804 [2024-07-13 05:25:20.218593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:13.804 [2024-07-13 05:25:20.235944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e88f8 00:36:13.804 [2024-07-13 05:25:20.238352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.804 [2024-07-13 05:25:20.238416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:13.804 [2024-07-13 05:25:20.251393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9b30 00:36:13.804 [2024-07-13 05:25:20.253151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.804 [2024-07-13 05:25:20.253204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
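The digest errors that follow are self-inflicted: bdevperf's crc32c (TCP data digest) computations go through the accel framework, and the accel_error_inject_error RPC corrupts their results, so affected 4 KiB WRITEs complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22) instead of succeeding. Because bdev_nvme_set_options was given --bdev-retry-count -1 (which reads as unlimited retries) the workload still completes, while --nvme-error-stat counts each transient-error completion. The setup traced above, condensed into a sketch with the RPCs exactly as logged; my reading of -i 256 is "inject on every 256th crc32c operation":

# Error-injection setup for the write-digest test (digest.sh@61 through @67).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # before attach, as traced
rpc accel_error_inject_error -o crc32c -t disable                   # start from a clean state
rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                  # --ddgst enables data digests
rpc accel_error_inject_error -o crc32c -t corrupt -i 256            # corrupt crc32c results (every 256th op, my reading of -i)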
00:36:13.804 [... repeated injected-error entries from 05:25:20.201 through 05:25:21.565 elided: the same three-line pattern of a Data digest error on tqpair=(0x618000005480) with a varying pdu value, the failing WRITE (len:1, varying cid and lba), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, repeating for the rest of the 2-second run ...]
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.093 [2024-07-13 05:25:21.565409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:15.093 [2024-07-13 05:25:21.580051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e88f8 00:36:15.093 [2024-07-13 05:25:21.581210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.093 [2024-07-13 05:25:21.581249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:15.350 [2024-07-13 05:25:21.598815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:36:15.350 [2024-07-13 05:25:21.601080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.350 [2024-07-13 05:25:21.601134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:15.350 [2024-07-13 05:25:21.613361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9f68 00:36:15.350 [2024-07-13 05:25:21.615005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.350 [2024-07-13 05:25:21.615057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:15.350 [2024-07-13 05:25:21.629568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0ea0 00:36:15.350 [2024-07-13 05:25:21.631002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.350 [2024-07-13 05:25:21.631041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.350 [2024-07-13 05:25:21.647594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fa3a0 00:36:15.350 [2024-07-13 05:25:21.650241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.350 [2024-07-13 05:25:21.650295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.350 [2024-07-13 05:25:21.658820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7970 00:36:15.350 [2024-07-13 05:25:21.660015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.350 [2024-07-13 05:25:21.660068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:15.350 [2024-07-13 05:25:21.674754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaef0 
00:36:15.351 [2024-07-13 05:25:21.675965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:25158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.351 [2024-07-13 05:25:21.676018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:15.351 [2024-07-13 05:25:21.691027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fdeb0 00:36:15.351 [2024-07-13 05:25:21.692560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.351 [2024-07-13 05:25:21.692613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:15.351 [2024-07-13 05:25:21.706134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8088 00:36:15.351 [2024-07-13 05:25:21.707522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.351 [2024-07-13 05:25:21.707576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:15.351 [2024-07-13 05:25:21.723586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:36:15.351 [2024-07-13 05:25:21.725271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.351 [2024-07-13 05:25:21.725326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:15.351 [2024-07-13 05:25:21.739760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0bc0 00:36:15.351 [2024-07-13 05:25:21.741580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.351 [2024-07-13 05:25:21.741633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:15.351 [2024-07-13 05:25:21.755687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb048 00:36:15.351 [2024-07-13 05:25:21.757502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.351 [2024-07-13 05:25:21.757555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:15.351 [2024-07-13 05:25:21.770258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb048 00:36:15.351 [2024-07-13 05:25:21.772085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.351 [2024-07-13 05:25:21.772141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:15.351 [2024-07-13 05:25:21.784856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000005480) with pdu=0x2000195e27f0 00:36:15.351 [2024-07-13 05:25:21.786079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.351 [2024-07-13 05:25:21.786121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:15.351 [2024-07-13 05:25:21.801346] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de470 00:36:15.351 [2024-07-13 05:25:21.802523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.351 [2024-07-13 05:25:21.802563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:15.351 [2024-07-13 05:25:21.818124] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fd640 00:36:15.351 [2024-07-13 05:25:21.819603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.351 [2024-07-13 05:25:21.819643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:15.351 [2024-07-13 05:25:21.834574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eb328 00:36:15.351 [2024-07-13 05:25:21.836022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.351 [2024-07-13 05:25:21.836061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:15.660 [2024-07-13 05:25:21.850084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195feb58 00:36:15.660 [2024-07-13 05:25:21.851557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.660 [2024-07-13 05:25:21.851599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:15.660 [2024-07-13 05:25:21.868368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2d80 00:36:15.660 [2024-07-13 05:25:21.870024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.660 [2024-07-13 05:25:21.870073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:15.660 [2024-07-13 05:25:21.885088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eb328 00:36:15.660 [2024-07-13 05:25:21.886946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.660 [2024-07-13 05:25:21.886986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:15.660 [2024-07-13 05:25:21.900390] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df118 00:36:15.660 [2024-07-13 05:25:21.902246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.660 [2024-07-13 05:25:21.902303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:15.660 [2024-07-13 05:25:21.915450] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f46d0 00:36:15.660 [2024-07-13 05:25:21.916646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.660 [2024-07-13 05:25:21.916685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:15.660 [2024-07-13 05:25:21.931960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fa7d8 00:36:15.660 [2024-07-13 05:25:21.933158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.660 [2024-07-13 05:25:21.933215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:15.660 [2024-07-13 05:25:21.950658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3498 00:36:15.660 [2024-07-13 05:25:21.952944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.660 [2024-07-13 05:25:21.952984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:15.660 [2024-07-13 05:25:21.965878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6300 00:36:15.660 [2024-07-13 05:25:21.967542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.660 [2024-07-13 05:25:21.967582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:15.660 [2024-07-13 05:25:21.982646] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de470 00:36:15.660 [2024-07-13 05:25:21.984352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.660 [2024-07-13 05:25:21.984410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:15.660 [2024-07-13 05:25:22.001656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f96f8 00:36:15.660 [2024-07-13 05:25:22.004428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.660 [2024-07-13 05:25:22.004468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007f 
p:0 m:0 dnr:0 00:36:15.660 [2024-07-13 05:25:22.013275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9f68 00:36:15.660 [2024-07-13 05:25:22.014492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.660 [2024-07-13 05:25:22.014531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:15.660 [2024-07-13 05:25:22.028545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaef0 00:36:15.660 [2024-07-13 05:25:22.029759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.660 [2024-07-13 05:25:22.029799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:15.661 [2024-07-13 05:25:22.046392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6cc8 00:36:15.661 [2024-07-13 05:25:22.047821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.661 [2024-07-13 05:25:22.047861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:15.661 [2024-07-13 05:25:22.063013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8088 00:36:15.661 [2024-07-13 05:25:22.064659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.661 [2024-07-13 05:25:22.064699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:15.661 [2024-07-13 05:25:22.078547] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e88f8 00:36:15.661 [2024-07-13 05:25:22.080247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.661 [2024-07-13 05:25:22.080303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:15.661 [2024-07-13 05:25:22.096799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:15.661 [2024-07-13 05:25:22.098722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.661 [2024-07-13 05:25:22.098762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:15.661 [2024-07-13 05:25:22.113749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed920 00:36:15.661 [2024-07-13 05:25:22.115821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.661 [2024-07-13 05:25:22.115861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:15.661 [2024-07-13 05:25:22.128941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0bc0 00:36:15.661 [2024-07-13 05:25:22.130977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.661 [2024-07-13 05:25:22.131016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:15.661 [2024-07-13 05:25:22.144059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3060 00:36:15.661 [2024-07-13 05:25:22.145498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.661 [2024-07-13 05:25:22.145544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:15.923 [2024-07-13 05:25:22.161467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:15.923 [2024-07-13 05:25:22.162894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.923 [2024-07-13 05:25:22.162947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:15.923 00:36:15.923 Latency(us) 00:36:15.923 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:15.923 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:15.923 nvme0n1 : 2.01 15987.65 62.45 0.00 0.00 7989.29 3640.89 20194.80 00:36:15.923 =================================================================================================================== 00:36:15.923 Total : 15987.65 62.45 0.00 0.00 7989.29 3640.89 20194.80 00:36:15.923 0 00:36:15.923 05:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:15.923 05:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:15.923 05:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:15.923 05:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:15.923 | .driver_specific 00:36:15.923 | .nvme_error 00:36:15.923 | .status_code 00:36:15.923 | .command_transient_transport_error' 00:36:16.181 05:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 125 > 0 )) 00:36:16.181 05:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 861200 00:36:16.181 05:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 861200 ']' 00:36:16.181 05:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 861200 00:36:16.181 05:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:36:16.181 05:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:16.181 05:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
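For reference while reading that trace: get_transient_errcount asks the bdevperf RPC server for per-bdev I/O statistics and extracts the NVMe transient-transport-error counter, and the test asserts it is non-zero (here, 125). A minimal standalone sketch of the same query, assuming an SPDK checkout at this job's workspace path and a bdevperf instance already listening on /var/tmp/bperf.sock:

    #!/usr/bin/env bash
    # Sketch of get_transient_errcount as traced above (digest.sh@27/@28).
    # Assumes bdevperf was started with bdev_nvme_set_options --nvme-error-stat,
    # so bdev_get_iostat carries per-status-code NVMe error counters.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    get_transient_errcount() {
        local bdev=$1
        "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # The assertion from digest.sh@71: at least one transient transport error.
    (($(get_transient_errcount nvme0n1) > 0))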
00:36:16.181 05:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 861200
05:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 861200 ']'
05:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 861200
05:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
05:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
05:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 861200
00:36:16.181 05:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:36:16.181 05:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:36:16.181 05:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 861200'
killing process with pid 861200
00:36:16.181 05:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 861200
Received shutdown signal, test time was about 2.000000 seconds
00:36:16.181
00:36:16.181 Latency(us)
00:36:16.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:16.181 ===================================================================================================================
00:36:16.181 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:16.181 05:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 861200
00:36:17.113 05:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
05:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
05:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
05:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
05:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
05:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=861861
05:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
05:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 861861 /var/tmp/bperf.sock
05:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 861861 ']'
05:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
05:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
05:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
05:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
05:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:17.371 [2024-07-13 05:25:23.630379] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:36:17.371 [2024-07-13 05:25:23.630519] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid861861 ]
00:36:17.371 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:17.371 Zero copy mechanism will not be used.
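The run_bperf_err trace above launches a fresh bdevperf in paused mode and blocks until its RPC socket answers. waitforlisten is an autotest_common.sh helper whose internals are not shown in this log; a simplified sketch of the launch-and-wait pattern, under that assumption (rpc_get_methods is used here purely as a liveness probe):

    #!/usr/bin/env bash
    # Simplified sketch of the bperf start-up traced above. The -z flag makes
    # bdevperf wait for RPC configuration instead of running immediately.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    "$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
        -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Poll until the RPC server answers (the harness allows ~100 retries).
    for ((i = 0; i < 100; i++)); do
        "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" rpc_get_methods &> /dev/null && break
        sleep 0.1
    done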
00:36:17.371 EAL: No free 2048 kB hugepages reported on node 1
00:36:17.371 [2024-07-13 05:25:23.758346] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:17.629 [2024-07-13 05:25:24.011609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:36:18.196 05:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:18.196 05:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:36:18.196 05:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:18.196 05:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:18.453 05:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:18.453 05:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:18.453 05:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:18.453 05:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:18.453 05:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:18.453 05:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:19.018 nvme0n1
00:36:19.018 05:25:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:36:19.018 05:25:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:19.018 05:25:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:19.018 05:25:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:19.018 05:25:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:36:19.018 05:25:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:36:19.018 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:19.018 Zero copy mechanism will not be used.
00:36:19.018 Running I/O for 2 seconds...
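Those RPCs are the whole setup for this error pass: bperf_rpc calls go to the bdevperf initiator on /var/tmp/bperf.sock, while rpc_cmd goes to the nvmf target's RPC socket, where accel_error_inject_error makes every 32nd crc32c computation return a wrong value. With data digest (--ddgst) negotiated on the TCP connection, each corrupted digest surfaces as the data_crc32_calc_done error and the COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions that follow. A sketch of the same sequence, assuming the target listens on SPDK's default /var/tmp/spdk.sock (the socket routing is inferred from the harness, not shown explicitly in this log):

    #!/usr/bin/env bash
    # Sketch of the setup traced above: bperf.sock drives the initiator
    # (bdevperf); the default socket drives the nvmf target, which is where
    # the crc32c corruption is assumed to be injected.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bperf_rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
    tgt_rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; } # default /var/tmp/spdk.sock assumed

    # Keep per-status-code NVMe error counters and retry failed I/O forever,
    # so digest errors are counted without failing the workload outright.
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Start clean: no crc32c error injection.
    tgt_rpc accel_error_inject_error -o crc32c -t disable

    # Attach the TCP controller with data digest enabled; this exposes nvme0n1.
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt every 32nd crc32c operation (-i 32), then run the workload.
    tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests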
00:36:19.018 [2024-07-13 05:25:25.400398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:36:19.018 [2024-07-13 05:25:25.400862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:19.018 [2024-07-13 05:25:25.400941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... records from 05:25:25.411383 through 05:25:25.929735 omitted: the same triple throughout, a data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90, the WRITE it hit (qid:1 cid:15, len:32, varying lba), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:36:19.534 [2024-07-13 05:25:25.938623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:36:19.534 [2024-07-13 05:25:25.939059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:19.534 [2024-07-13 05:25:25.939097] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:19.534 [2024-07-13 05:25:25.948423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.534 [2024-07-13 05:25:25.948914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.534 [2024-07-13 05:25:25.948952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:19.534 [2024-07-13 05:25:25.958381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.534 [2024-07-13 05:25:25.958843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.534 [2024-07-13 05:25:25.958919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:19.534 [2024-07-13 05:25:25.968281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.534 [2024-07-13 05:25:25.968702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.534 [2024-07-13 05:25:25.968740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:19.534 [2024-07-13 05:25:25.978048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.534 [2024-07-13 05:25:25.978485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.534 [2024-07-13 05:25:25.978523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:19.534 [2024-07-13 05:25:25.987574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.534 [2024-07-13 05:25:25.987955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.534 [2024-07-13 05:25:25.988011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:19.534 [2024-07-13 05:25:25.995580] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.534 [2024-07-13 05:25:25.996063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.534 [2024-07-13 05:25:25.996116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:19.534 [2024-07-13 05:25:26.004898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.534 [2024-07-13 05:25:26.005241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:19.534 [2024-07-13 05:25:26.005282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:19.534 [2024-07-13 05:25:26.014228] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.534 [2024-07-13 05:25:26.014567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.535 [2024-07-13 05:25:26.014607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:19.535 [2024-07-13 05:25:26.022640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.535 [2024-07-13 05:25:26.023019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.535 [2024-07-13 05:25:26.023058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:19.535 [2024-07-13 05:25:26.031278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.535 [2024-07-13 05:25:26.031606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.535 [2024-07-13 05:25:26.031646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:19.793 [2024-07-13 05:25:26.040580] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.793 [2024-07-13 05:25:26.040942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.793 [2024-07-13 05:25:26.040981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:19.793 [2024-07-13 05:25:26.049354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.793 [2024-07-13 05:25:26.049694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.793 [2024-07-13 05:25:26.049749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:19.793 [2024-07-13 05:25:26.058033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.793 [2024-07-13 05:25:26.058419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.793 [2024-07-13 05:25:26.058456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:19.793 [2024-07-13 05:25:26.067478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.793 [2024-07-13 05:25:26.067811] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.793 [2024-07-13 05:25:26.067850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:19.793 [2024-07-13 05:25:26.076423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.793 [2024-07-13 05:25:26.076775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.793 [2024-07-13 05:25:26.076815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:19.793 [2024-07-13 05:25:26.085538] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.793 [2024-07-13 05:25:26.085931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.793 [2024-07-13 05:25:26.085970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:19.793 [2024-07-13 05:25:26.094145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.793 [2024-07-13 05:25:26.094538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.793 [2024-07-13 05:25:26.094594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:19.793 [2024-07-13 05:25:26.102655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.793 [2024-07-13 05:25:26.102999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.793 [2024-07-13 05:25:26.103038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:19.793 [2024-07-13 05:25:26.111909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.793 [2024-07-13 05:25:26.112296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.793 [2024-07-13 05:25:26.112348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:19.793 [2024-07-13 05:25:26.122363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.793 [2024-07-13 05:25:26.122881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.793 [2024-07-13 05:25:26.122920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:19.793 [2024-07-13 05:25:26.131688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90 00:36:19.793 [2024-07-13 05:25:26.132087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.793 [2024-07-13 05:25:26.132126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:19.793 [2024-07-13 05:25:26.140761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.793 [2024-07-13 05:25:26.141183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.793 [2024-07-13 05:25:26.141244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:19.793 [2024-07-13 05:25:26.150354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.793 [2024-07-13 05:25:26.150828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.794 [2024-07-13 05:25:26.150890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:19.794 [2024-07-13 05:25:26.159968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.794 [2024-07-13 05:25:26.160326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.794 [2024-07-13 05:25:26.160366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:19.794 [2024-07-13 05:25:26.169109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.794 [2024-07-13 05:25:26.169483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.794 [2024-07-13 05:25:26.169538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:19.794 [2024-07-13 05:25:26.178339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.794 [2024-07-13 05:25:26.178786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.794 [2024-07-13 05:25:26.178839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:19.794 [2024-07-13 05:25:26.188029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.794 [2024-07-13 05:25:26.188428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.794 [2024-07-13 05:25:26.188480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:19.794 [2024-07-13 05:25:26.196525] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.794 [2024-07-13 05:25:26.197057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.794 [2024-07-13 05:25:26.197109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:19.794 [2024-07-13 05:25:26.205519] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.794 [2024-07-13 05:25:26.205939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.794 [2024-07-13 05:25:26.205994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:19.794 [2024-07-13 05:25:26.214591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.794 [2024-07-13 05:25:26.215035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.794 [2024-07-13 05:25:26.215088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:19.794 [2024-07-13 05:25:26.224334] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.794 [2024-07-13 05:25:26.224671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.794 [2024-07-13 05:25:26.224709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:19.794 [2024-07-13 05:25:26.232317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.794 [2024-07-13 05:25:26.232687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.794 [2024-07-13 05:25:26.232745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:19.794 [2024-07-13 05:25:26.241649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.794 [2024-07-13 05:25:26.242054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.794 [2024-07-13 05:25:26.242093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:19.794 [2024-07-13 05:25:26.250161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.794 [2024-07-13 05:25:26.250511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.794 [2024-07-13 05:25:26.250550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:19.794 [2024-07-13 05:25:26.258396] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.794 [2024-07-13 05:25:26.258737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.794 [2024-07-13 05:25:26.258776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:19.794 [2024-07-13 05:25:26.266880] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.794 [2024-07-13 05:25:26.267190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.794 [2024-07-13 05:25:26.267229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:19.794 [2024-07-13 05:25:26.274811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.794 [2024-07-13 05:25:26.275145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.794 [2024-07-13 05:25:26.275185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:19.794 [2024-07-13 05:25:26.283311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:19.794 [2024-07-13 05:25:26.283664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.794 [2024-07-13 05:25:26.283703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:19.794 [2024-07-13 05:25:26.291996] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.053 [2024-07-13 05:25:26.292314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.053 [2024-07-13 05:25:26.292361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:20.053 [2024-07-13 05:25:26.300457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.053 [2024-07-13 05:25:26.300819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.053 [2024-07-13 05:25:26.300883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:20.053 [2024-07-13 05:25:26.308703] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.053 [2024-07-13 05:25:26.309116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.053 [2024-07-13 05:25:26.309156] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.053 [2024-07-13 05:25:26.317158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.053 [2024-07-13 05:25:26.317550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.053 [2024-07-13 05:25:26.317589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:20.053 [2024-07-13 05:25:26.325301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.053 [2024-07-13 05:25:26.325625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.053 [2024-07-13 05:25:26.325678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:20.053 [2024-07-13 05:25:26.334278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.053 [2024-07-13 05:25:26.334645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.053 [2024-07-13 05:25:26.334702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:20.053 [2024-07-13 05:25:26.343256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.053 [2024-07-13 05:25:26.343699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.053 [2024-07-13 05:25:26.343753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.053 [2024-07-13 05:25:26.351596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.053 [2024-07-13 05:25:26.351989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.053 [2024-07-13 05:25:26.352028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:20.053 [2024-07-13 05:25:26.360520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.053 [2024-07-13 05:25:26.360949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.053 [2024-07-13 05:25:26.361003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:20.053 [2024-07-13 05:25:26.368962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.053 [2024-07-13 05:25:26.369402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:20.053 [2024-07-13 05:25:26.369455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:20.053 [2024-07-13 05:25:26.377383] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.053 [2024-07-13 05:25:26.377820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.053 [2024-07-13 05:25:26.377858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.053 [2024-07-13 05:25:26.386216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.053 [2024-07-13 05:25:26.386582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.053 [2024-07-13 05:25:26.386621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:20.053 [2024-07-13 05:25:26.396305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.053 [2024-07-13 05:25:26.396675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.053 [2024-07-13 05:25:26.396730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:20.053 [2024-07-13 05:25:26.405670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.053 [2024-07-13 05:25:26.406067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.053 [2024-07-13 05:25:26.406107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:20.053 [2024-07-13 05:25:26.414532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.053 [2024-07-13 05:25:26.414861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.053 [2024-07-13 05:25:26.414914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.053 [2024-07-13 05:25:26.423497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.053 [2024-07-13 05:25:26.423922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.053 [2024-07-13 05:25:26.423962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:20.053 [2024-07-13 05:25:26.432127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.053 [2024-07-13 05:25:26.432522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.053 [2024-07-13 05:25:26.432575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:20.053 [2024-07-13 05:25:26.441795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.054 [2024-07-13 05:25:26.442181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.054 [2024-07-13 05:25:26.442241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:20.054 [2024-07-13 05:25:26.449825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.054 [2024-07-13 05:25:26.450074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.054 [2024-07-13 05:25:26.450114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.054 [2024-07-13 05:25:26.458905] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.054 [2024-07-13 05:25:26.459255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.054 [2024-07-13 05:25:26.459308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:20.054 [2024-07-13 05:25:26.467796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.054 [2024-07-13 05:25:26.468204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.054 [2024-07-13 05:25:26.468256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:20.054 [2024-07-13 05:25:26.476246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.054 [2024-07-13 05:25:26.476578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.054 [2024-07-13 05:25:26.476617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:20.054 [2024-07-13 05:25:26.485220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.054 [2024-07-13 05:25:26.485583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.054 [2024-07-13 05:25:26.485622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.054 [2024-07-13 05:25:26.493888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90 00:36:20.054 [2024-07-13 05:25:26.494224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.054 [2024-07-13 05:25:26.494264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:20.054 [2024-07-13 05:25:26.502246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.054 [2024-07-13 05:25:26.502566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.054 [2024-07-13 05:25:26.502619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:20.054 [2024-07-13 05:25:26.510683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.054 [2024-07-13 05:25:26.511129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.054 [2024-07-13 05:25:26.511169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:20.054 [2024-07-13 05:25:26.519179] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.054 [2024-07-13 05:25:26.519499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.054 [2024-07-13 05:25:26.519539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.054 [2024-07-13 05:25:26.527470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.054 [2024-07-13 05:25:26.527837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.054 [2024-07-13 05:25:26.527889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:20.054 [2024-07-13 05:25:26.535627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.054 [2024-07-13 05:25:26.535984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.054 [2024-07-13 05:25:26.536025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:20.054 [2024-07-13 05:25:26.545050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.054 [2024-07-13 05:25:26.545442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.054 [2024-07-13 05:25:26.545498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:20.319 [2024-07-13 05:25:26.553637] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.319 [2024-07-13 05:25:26.554004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.319 [2024-07-13 05:25:26.554046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.319 [2024-07-13 05:25:26.562051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.319 [2024-07-13 05:25:26.562369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.319 [2024-07-13 05:25:26.562411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:20.319 [2024-07-13 05:25:26.570206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.319 [2024-07-13 05:25:26.570547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.319 [2024-07-13 05:25:26.570586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:20.319 [2024-07-13 05:25:26.578936] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.319 [2024-07-13 05:25:26.579313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.319 [2024-07-13 05:25:26.579369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:20.319 [2024-07-13 05:25:26.587664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.319 [2024-07-13 05:25:26.588027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.319 [2024-07-13 05:25:26.588076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.319 [2024-07-13 05:25:26.596891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.319 [2024-07-13 05:25:26.597269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.319 [2024-07-13 05:25:26.597325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:20.319 [2024-07-13 05:25:26.607766] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.319 [2024-07-13 05:25:26.608250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.319 [2024-07-13 05:25:26.608292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
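(For reference: the "Data digest error" lines above come from the NVMe/TCP data digest (DDGST) check — the receiver recomputes a CRC32C over each DATA PDU payload, compares it against the digest carried in the PDU, and on mismatch the WRITE is completed with a transient transport error so it can be retried. The sketch below is a minimal, self-contained bitwise CRC32C with the parameters NVMe/TCP digests use; it is an illustrative stand-in, not SPDK's optimized table/hardware-accelerated implementation, which is exposed as spdk_crc32c_update() in include/spdk/crc32.h.)

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* Bitwise CRC32C (Castagnoli): reflected polynomial 0x82F63B78,
     * initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF. These are the
     * parameters used for the NVMe/TCP header and data digests. */
    static uint32_t crc32c(const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        uint32_t crc = 0xFFFFFFFFu;

        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++) {
                /* Shift one bit; apply the polynomial when a 1 falls out. */
                crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        /* Known-answer test: CRC32C("123456789") == 0xE3069283. */
        const char *kat = "123456789";
        printf("crc32c = 0x%08X\n", crc32c(kat, strlen(kat)));
        return 0;
    }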
00:36:20.582 [2024-07-13 05:25:27.033372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:36:20.582 [2024-07-13 05:25:27.033782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:20.582 [2024-07-13 05:25:27.033829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:20.582 [2024-07-13 05:25:27.041637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:36:20.582 [2024-07-13 05:25:27.042055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:20.582 [2024-07-13 05:25:27.042095] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:20.582 [2024-07-13 05:25:27.049733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.582 [2024-07-13 05:25:27.050069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.582 [2024-07-13 05:25:27.050108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:20.582 [2024-07-13 05:25:27.057697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.582 [2024-07-13 05:25:27.058053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.582 [2024-07-13 05:25:27.058093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.582 [2024-07-13 05:25:27.066020] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.582 [2024-07-13 05:25:27.066352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.582 [2024-07-13 05:25:27.066393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:20.582 [2024-07-13 05:25:27.075515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.582 [2024-07-13 05:25:27.075885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.582 [2024-07-13 05:25:27.075928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:20.841 [2024-07-13 05:25:27.083789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.841 [2024-07-13 05:25:27.084115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.841 [2024-07-13 05:25:27.084157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:20.841 [2024-07-13 05:25:27.091993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.841 [2024-07-13 05:25:27.092333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.841 [2024-07-13 05:25:27.092374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.841 [2024-07-13 05:25:27.100139] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.841 [2024-07-13 05:25:27.100508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:20.841 [2024-07-13 05:25:27.100548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:20.841 [2024-07-13 05:25:27.108330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.841 [2024-07-13 05:25:27.108647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.841 [2024-07-13 05:25:27.108688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:20.841 [2024-07-13 05:25:27.116627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.841 [2024-07-13 05:25:27.116949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.841 [2024-07-13 05:25:27.116989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:20.841 [2024-07-13 05:25:27.124924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.841 [2024-07-13 05:25:27.125270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.841 [2024-07-13 05:25:27.125323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.841 [2024-07-13 05:25:27.133052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.841 [2024-07-13 05:25:27.133367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.841 [2024-07-13 05:25:27.133407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:20.841 [2024-07-13 05:25:27.141054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.841 [2024-07-13 05:25:27.141427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.841 [2024-07-13 05:25:27.141483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:20.841 [2024-07-13 05:25:27.149390] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.841 [2024-07-13 05:25:27.149737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.841 [2024-07-13 05:25:27.149777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:20.841 [2024-07-13 05:25:27.158220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.841 [2024-07-13 05:25:27.158533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.841 [2024-07-13 05:25:27.158574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.841 [2024-07-13 05:25:27.166838] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.841 [2024-07-13 05:25:27.167160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.841 [2024-07-13 05:25:27.167201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:20.841 [2024-07-13 05:25:27.175497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.841 [2024-07-13 05:25:27.175845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.841 [2024-07-13 05:25:27.175892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:20.841 [2024-07-13 05:25:27.184035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.841 [2024-07-13 05:25:27.184355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.841 [2024-07-13 05:25:27.184396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:20.841 [2024-07-13 05:25:27.192328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.841 [2024-07-13 05:25:27.192649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.841 [2024-07-13 05:25:27.192689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.841 [2024-07-13 05:25:27.200333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.841 [2024-07-13 05:25:27.200650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.841 [2024-07-13 05:25:27.200691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:20.841 [2024-07-13 05:25:27.208702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.841 [2024-07-13 05:25:27.209026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.841 [2024-07-13 05:25:27.209066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:20.841 [2024-07-13 05:25:27.217147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:36:20.841 [2024-07-13 05:25:27.217459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.841 [2024-07-13 05:25:27.217499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:20.841 [2024-07-13 05:25:27.225282] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.841 [2024-07-13 05:25:27.225611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.841 [2024-07-13 05:25:27.225652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.841 [2024-07-13 05:25:27.233836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.841 [2024-07-13 05:25:27.234156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.841 [2024-07-13 05:25:27.234197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:20.841 [2024-07-13 05:25:27.242218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.841 [2024-07-13 05:25:27.242532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.841 [2024-07-13 05:25:27.242573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:20.841 [2024-07-13 05:25:27.250178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.841 [2024-07-13 05:25:27.250563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.841 [2024-07-13 05:25:27.250619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:20.841 [2024-07-13 05:25:27.258876] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.841 [2024-07-13 05:25:27.259259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.841 [2024-07-13 05:25:27.259298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.841 [2024-07-13 05:25:27.268771] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.841 [2024-07-13 05:25:27.269096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.841 [2024-07-13 05:25:27.269146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:20.841 [2024-07-13 05:25:27.276909] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.841 [2024-07-13 05:25:27.277218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.841 [2024-07-13 05:25:27.277258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:20.841 [2024-07-13 05:25:27.285480] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.841 [2024-07-13 05:25:27.285947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.841 [2024-07-13 05:25:27.285988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:20.841 [2024-07-13 05:25:27.294821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.842 [2024-07-13 05:25:27.295262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.842 [2024-07-13 05:25:27.295316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.842 [2024-07-13 05:25:27.304098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.842 [2024-07-13 05:25:27.304547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.842 [2024-07-13 05:25:27.304602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:20.842 [2024-07-13 05:25:27.314063] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.842 [2024-07-13 05:25:27.314448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.842 [2024-07-13 05:25:27.314502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:20.842 [2024-07-13 05:25:27.323614] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.842 [2024-07-13 05:25:27.324066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.842 [2024-07-13 05:25:27.324107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:20.842 [2024-07-13 05:25:27.333559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:20.842 [2024-07-13 05:25:27.333975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.842 [2024-07-13 05:25:27.334016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.100 [2024-07-13 05:25:27.343195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:21.100 [2024-07-13 05:25:27.343670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.100 [2024-07-13 05:25:27.343712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:21.100 [2024-07-13 05:25:27.352427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:21.100 [2024-07-13 05:25:27.352743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.100 [2024-07-13 05:25:27.352783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:21.100 [2024-07-13 05:25:27.361886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:21.100 [2024-07-13 05:25:27.362345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.100 [2024-07-13 05:25:27.362401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:21.100 [2024-07-13 05:25:27.371715] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:21.100 [2024-07-13 05:25:27.372127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.100 [2024-07-13 05:25:27.372183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.100 [2024-07-13 05:25:27.381209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:21.100 [2024-07-13 05:25:27.381617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.100 [2024-07-13 05:25:27.381656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:21.100 00:36:21.100 Latency(us) 00:36:21.100 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:21.100 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:21.100 nvme0n1 : 2.00 3371.34 421.42 0.00 0.00 4733.01 3616.62 14369.37 00:36:21.100 =================================================================================================================== 00:36:21.100 Total : 3371.34 421.42 0.00 0.00 4733.01 3616.62 14369.37 00:36:21.100 0 00:36:21.100 05:25:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:21.100 05:25:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:21.100 05:25:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:21.100 | .driver_specific 00:36:21.100 | .nvme_error 
00:36:21.100 | .status_code
00:36:21.100 | .command_transient_transport_error'
00:36:21.100 05:25:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:21.358 05:25:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 217 > 0 ))
00:36:21.358 05:25:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 861861
00:36:21.358 05:25:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 861861 ']'
00:36:21.358 05:25:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 861861
00:36:21.358 05:25:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:36:21.358 05:25:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:21.358 05:25:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 861861
00:36:21.358 05:25:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:36:21.358 05:25:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:36:21.358 05:25:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 861861'
00:36:21.358 killing process with pid 861861
00:36:21.358 05:25:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 861861
00:36:21.358 Received shutdown signal, test time was about 2.000000 seconds
00:36:21.358 
00:36:21.358 Latency(us)
00:36:21.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:21.358 ===================================================================================================================
00:36:21.358 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:21.358 05:25:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 861861
00:36:22.291 05:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 859837
00:36:22.291 05:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 859837 ']'
00:36:22.291 05:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 859837
00:36:22.291 05:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:36:22.291 05:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:22.291 05:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 859837
00:36:22.575 05:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:36:22.575 05:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:36:22.575 05:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 859837'
00:36:22.575 killing process with pid 859837
00:36:22.575 05:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 859837
00:36:22.575 05:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 859837
00:36:23.950 
00:36:23.950 real 0m23.647s user 0m45.826s sys 0m4.539s
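The assertion (( 217 > 0 )) above is the heart of the digest-error test: bdev_get_iostat exposes the NVMe error counters under .driver_specific.nvme_error, and the test passes when command_transient_transport_error is non-zero (217 in this run). A minimal standalone sketch of the same check, using only the rpc.py path, bperf socket and bdev name visible in this run's trace:

#!/usr/bin/env bash
# Sketch of get_transient_errcount as traced above (host/digest.sh@18/@27/@28);
# paths and names are the ones from this run, not a general recipe.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
get_transient_errcount() {
    "$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'
}
# host/digest.sh@71: the test asserts that at least one transient transport error was seen.
(( $(get_transient_errcount nvme0n1) > 0 ))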
00:36:23.950 05:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable
00:36:23.950 05:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:23.950 ************************************
00:36:23.950 END TEST nvmf_digest_error
00:36:23.950 ************************************
00:36:23.950 05:25:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:36:23.950 05:25:30 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:36:23.950 05:25:30 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:36:23.950 05:25:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:36:23.950 05:25:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:36:23.950 05:25:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:36:23.950 05:25:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:36:23.950 05:25:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:36:23.950 05:25:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:36:23.950 rmmod nvme_tcp
00:36:23.950 rmmod nvme_fabrics
00:36:23.950 rmmod nvme_keyring
00:36:23.950 05:25:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:36:23.950 05:25:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:36:23.950 05:25:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:36:23.950 05:25:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 859837 ']'
00:36:23.950 05:25:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 859837
00:36:23.950 05:25:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 859837 ']'
00:36:23.950 05:25:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 859837
00:36:23.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (859837) - No such process
00:36:23.950 05:25:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 859837 is not found'
00:36:23.950 Process with pid 859837 is not found
00:36:23.950 05:25:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:36:23.950 05:25:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:36:23.950 05:25:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:36:23.950 05:25:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:36:23.950 05:25:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:36:23.950 05:25:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:23.950 05:25:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:36:23.950 05:25:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:25.857 05:25:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:36:25.857 
00:36:25.857 real 0m52.842s user 1m35.355s sys 0m10.461s
00:36:25.857 05:25:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable
00:36:25.857 05:25:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:36:25.857 ************************************
00:36:25.857 END TEST nvmf_digest
00:36:25.857 ************************************
00:36:25.857 05:25:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
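Condensed, the nvmftestfini sequence just traced unloads the kernel initiator modules and tears down the test network namespace before the next suite starts. A rough standalone sketch follows; the interface, namespace and pid values are this host's, and since the body of _remove_spdk_ns is not shown in the trace, the netns deletion line is an assumption rather than the harness's literal code:

#!/usr/bin/env bash
# Sketch of nvmftestfini as traced above (nvmf/common.sh@488..@279).
sync
modprobe -v -r nvme-tcp          # logged above as: rmmod nvme_tcp / nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics
kill -0 859837 2>/dev/null || echo 'Process with pid 859837 is not found'
ip netns delete cvl_0_0_ns_spdk  # assumed effect of _remove_spdk_ns (not shown in the trace)
ip -4 addr flush cvl_0_1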
00:36:25.857 05:25:32 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]]
00:36:25.857 05:25:32 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]]
00:36:25.857 05:25:32 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]]
00:36:25.857 05:25:32 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:36:25.857 05:25:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:36:25.857 05:25:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:36:25.857 05:25:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:25.857 ************************************
00:36:25.857 START TEST nvmf_bdevperf
00:36:25.857 ************************************
00:36:25.857 05:25:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:36:25.857 * Looking for test storage...
00:36:26.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:36:26.116 05:25:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:28.021 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:28.021 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:28.021 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:28.021 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:28.021 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:28.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:28.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms
00:36:28.022 
00:36:28.022 --- 10.0.0.2 ping statistics ---
00:36:28.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:28.022 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms
00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:36:28.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:36:28.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms
00:36:28.022 
00:36:28.022 --- 10.0.0.1 ping statistics ---
00:36:28.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:28.022 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms
00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0
00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=864475
00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 864475
00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 864475 ']'
00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:28.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:36:28.022 05:25:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:28.022 [2024-07-13 05:25:34.501705] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:36:28.022 [2024-07-13 05:25:34.501846] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:28.280 EAL: No free 2048 kB hugepages reported on node 1
00:36:28.280 [2024-07-13 05:25:34.630785] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:36:28.538 [2024-07-13 05:25:34.860828] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:28.538 [2024-07-13 05:25:34.860937] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:28.538 [2024-07-13 05:25:34.860983] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:28.538 [2024-07-13 05:25:34.861006] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:28.538 [2024-07-13 05:25:34.861025] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:28.538 [2024-07-13 05:25:34.861129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:36:28.538 [2024-07-13 05:25:34.861166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:36:28.538 [2024-07-13 05:25:34.861175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0
00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:29.106 [2024-07-13 05:25:35.467241] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:29.106 Malloc0
00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:29.106 [2024-07-13 05:25:35.582085] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
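At this point the target side is fully configured. The rpc_cmd calls traced above (host/bdevperf.sh@17 through @21) amount to the following standalone bring-up; rpc_cmd in the harness wraps rpc.py against the in-namespace target, so the direct invocation below is a sketch of the same five calls, not the harness code itself:

#!/usr/bin/env bash
# Sketch of the target bring-up traced above (host/bdevperf.sh@17..@21).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192    # transport options exactly as traced
$RPC bdev_malloc_create 64 512 -b Malloc0       # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 from the trace
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420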
nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:29.106 [2024-07-13 05:25:35.582085] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:29.106 { 00:36:29.106 "params": { 00:36:29.106 "name": "Nvme$subsystem", 00:36:29.106 "trtype": "$TEST_TRANSPORT", 00:36:29.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:29.106 "adrfam": "ipv4", 00:36:29.106 "trsvcid": "$NVMF_PORT", 00:36:29.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:29.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:29.106 "hdgst": ${hdgst:-false}, 00:36:29.106 "ddgst": ${ddgst:-false} 00:36:29.106 }, 00:36:29.106 "method": "bdev_nvme_attach_controller" 00:36:29.106 } 00:36:29.106 EOF 00:36:29.106 )") 00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:36:29.106 05:25:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:29.106 "params": { 00:36:29.106 "name": "Nvme1", 00:36:29.107 "trtype": "tcp", 00:36:29.107 "traddr": "10.0.0.2", 00:36:29.107 "adrfam": "ipv4", 00:36:29.107 "trsvcid": "4420", 00:36:29.107 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:29.107 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:29.107 "hdgst": false, 00:36:29.107 "ddgst": false 00:36:29.107 }, 00:36:29.107 "method": "bdev_nvme_attach_controller" 00:36:29.107 }' 00:36:29.381 [2024-07-13 05:25:35.665030] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
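The rpc_cmd calls above assemble the whole target side: a TCP transport with 8192-byte in-capsule data, a 64 MB malloc bdev with 512-byte blocks, subsystem cnode1 with that namespace, and a listener on 10.0.0.2:4420; bdevperf is then pointed at it through the JSON rendered by gen_nvmf_target_json. The same setup can be issued by hand with scripts/rpc.py against the default /var/tmp/spdk.sock; a sketch, with bdevperf.json standing in for the /dev/fd/62 plumbing:

    # mirror of the rpc_cmd sequence in the log
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevperf consumes the rendered attach-controller JSON shown above;
    # a file works the same as the /dev/fd/62 process substitution
    ./build/examples/bdevperf --json bdevperf.json -q 128 -o 4096 -w verify -t 1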
00:36:29.381 [2024-07-13 05:25:35.665169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid864627 ] 00:36:29.381 EAL: No free 2048 kB hugepages reported on node 1 00:36:29.381 [2024-07-13 05:25:35.787230] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:29.639 [2024-07-13 05:25:36.023139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:30.205 Running I/O for 1 seconds... 00:36:31.139 00:36:31.139 Latency(us) 00:36:31.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:31.139 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:31.139 Verification LBA range: start 0x0 length 0x4000 00:36:31.139 Nvme1n1 : 1.00 6145.79 24.01 0.00 0.00 20737.34 1662.67 16990.81 00:36:31.139 =================================================================================================================== 00:36:31.139 Total : 6145.79 24.01 0.00 0.00 20737.34 1662.67 16990.81 00:36:32.076 05:25:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=865020 00:36:32.076 05:25:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:36:32.076 05:25:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:36:32.076 05:25:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:36:32.076 05:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:36:32.076 05:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:36:32.076 05:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:32.076 05:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:32.076 { 00:36:32.076 "params": { 00:36:32.076 "name": "Nvme$subsystem", 00:36:32.076 "trtype": "$TEST_TRANSPORT", 00:36:32.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:32.076 "adrfam": "ipv4", 00:36:32.076 "trsvcid": "$NVMF_PORT", 00:36:32.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:32.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:32.076 "hdgst": ${hdgst:-false}, 00:36:32.076 "ddgst": ${ddgst:-false} 00:36:32.076 }, 00:36:32.076 "method": "bdev_nvme_attach_controller" 00:36:32.076 } 00:36:32.076 EOF 00:36:32.076 )") 00:36:32.076 05:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:36:32.076 05:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:36:32.076 05:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:36:32.076 05:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:32.076 "params": { 00:36:32.076 "name": "Nvme1", 00:36:32.076 "trtype": "tcp", 00:36:32.076 "traddr": "10.0.0.2", 00:36:32.076 "adrfam": "ipv4", 00:36:32.076 "trsvcid": "4420", 00:36:32.076 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:32.076 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:32.076 "hdgst": false, 00:36:32.076 "ddgst": false 00:36:32.076 }, 00:36:32.076 "method": "bdev_nvme_attach_controller" 00:36:32.076 }' 00:36:32.076 [2024-07-13 05:25:38.565618] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
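The first, 1-second verify pass above completes cleanly, and the summary row is internally consistent: 6145.79 IOPS at the 4096-byte I/O size is the reported 24.01 MiB/s, and with queue depth 128 Little's law predicts an average latency of about 128 / 6145.79 s ≈ 20.8 ms, close to the 20737.34 us column. Both can be recomputed from the log in plain shell (awk for the float math):

    # throughput: IOPS * io_size / MiB
    awk 'BEGIN { printf "%.2f MiB/s\n", 6145.79 * 4096 / 1048576 }'
    # average latency implied by queue depth 128 (Little's law), in us
    awk 'BEGIN { printf "%.0f us\n", 128 / 6145.79 * 1e6 }'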
00:36:32.076 [2024-07-13 05:25:38.565762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid865020 ] 00:36:32.334 EAL: No free 2048 kB hugepages reported on node 1 00:36:32.334 [2024-07-13 05:25:38.689420] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:32.592 [2024-07-13 05:25:38.921065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:33.158 Running I/O for 15 seconds... 00:36:35.061 05:25:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 864475 00:36:35.061 05:25:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:36:35.061 [2024-07-13 05:25:41.515004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.061 [2024-07-13 05:25:41.515080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.061 [2024-07-13 05:25:41.515138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.061 [2024-07-13 05:25:41.515163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.061 [2024-07-13 05:25:41.515204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.061 [2024-07-13 05:25:41.515227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.061 [2024-07-13 05:25:41.515252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.061 [2024-07-13 05:25:41.515290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.061 [2024-07-13 05:25:41.515314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.061 [2024-07-13 05:25:41.515350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.061 [2024-07-13 05:25:41.515374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.061 [2024-07-13 05:25:41.515395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.061 [2024-07-13 05:25:41.515417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.061 [2024-07-13 05:25:41.515438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.061 [2024-07-13 05:25:41.515460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.061 [2024-07-13 05:25:41.515481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.061 [... ≈120 further nvme_io_qpair_print_command / spdk_nvme_print_completion notice pairs elided: READ commands sqid:1 for lba 97632 through 98560 and WRITE commands for lba 98576 and 98584 (len:8 each), every one completed ABORTED - SQ DELETION (00/08) ...] 00:36:35.064 [2024-07-13 05:25:41.521665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set 00:36:35.064 [2024-07-13 05:25:41.521697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:36:35.064 [2024-07-13 05:25:41.521718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:36:35.064 [2024-07-13 05:25:41.521739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98568 len:8 PRP1 0x0 PRP2 0x0 00:36:35.064 [2024-07-13 05:25:41.521762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0
m:0 dnr:0 00:36:35.064 [2024-07-13 05:25:41.522085] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2c80 was disconnected and freed. reset controller. 00:36:35.064 [2024-07-13 05:25:41.522210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:35.064 [2024-07-13 05:25:41.522249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.064 [2024-07-13 05:25:41.522276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:35.064 [2024-07-13 05:25:41.522299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.064 [2024-07-13 05:25:41.522321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:35.064 [2024-07-13 05:25:41.522344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.064 [2024-07-13 05:25:41.522367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:35.064 [2024-07-13 05:25:41.522389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.064 [2024-07-13 05:25:41.522409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.064 [2024-07-13 05:25:41.526700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.064 [2024-07-13 05:25:41.526778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.064 [2024-07-13 05:25:41.527642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.064 [2024-07-13 05:25:41.527688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.064 [2024-07-13 05:25:41.527718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.064 [2024-07-13 05:25:41.528034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.064 [2024-07-13 05:25:41.528342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.064 [2024-07-13 05:25:41.528376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.064 [2024-07-13 05:25:41.528411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.064 [2024-07-13 05:25:41.532576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
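This is the failover half of the test: host/bdevperf.sh started a second, 15-second verify run (bdevperfpid=865020), slept 3 seconds, then hard-killed the target (kill -9 864475). The qpair is torn down, every outstanding I/O at queue depth 128 is completed as ABORTED - SQ DELETION, the qpair is freed, and bdev_nvme falls into the reset/reconnect loop seen above and below, each connect() refused with errno 111 because nothing listens on 10.0.0.2:4420 anymore. A paraphrase of that driver logic, not the script verbatim (bdevperf.json stands in for the /dev/fd/63 plumbing):

    # second run: let I/O get going, then yank the target out from under it
    ./build/examples/bdevperf --json bdevperf.json -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!
    sleep 3
    kill -9 "$nvmfpid"   # 864475 in this run
    sleep 3              # the reconnect attempts below happen in this window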
00:36:35.064 [2024-07-13 05:25:41.541746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.064 [2024-07-13 05:25:41.542238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.064 [2024-07-13 05:25:41.542281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.064 [2024-07-13 05:25:41.542308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.064 [2024-07-13 05:25:41.542598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.064 [2024-07-13 05:25:41.542910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.064 [2024-07-13 05:25:41.542942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.064 [2024-07-13 05:25:41.542965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.064 [2024-07-13 05:25:41.547193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.065 [2024-07-13 05:25:41.556589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.065 [2024-07-13 05:25:41.557122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.065 [2024-07-13 05:25:41.557160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.065 [2024-07-13 05:25:41.557185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.065 [2024-07-13 05:25:41.557495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.065 [2024-07-13 05:25:41.557807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.065 [2024-07-13 05:25:41.557840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.065 [2024-07-13 05:25:41.557883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.324 [2024-07-13 05:25:41.562220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.324 [2024-07-13 05:25:41.571374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.324 [2024-07-13 05:25:41.571834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.324 [2024-07-13 05:25:41.571890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.324 [2024-07-13 05:25:41.571918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.324 [2024-07-13 05:25:41.572222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.324 [2024-07-13 05:25:41.572517] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.324 [2024-07-13 05:25:41.572549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.324 [2024-07-13 05:25:41.572572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.324 [2024-07-13 05:25:41.576820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.324 [2024-07-13 05:25:41.585962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.324 [2024-07-13 05:25:41.586453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.324 [2024-07-13 05:25:41.586495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.324 [2024-07-13 05:25:41.586521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.324 [2024-07-13 05:25:41.586811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.324 [2024-07-13 05:25:41.587117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.324 [2024-07-13 05:25:41.587150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.324 [2024-07-13 05:25:41.587180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.324 [2024-07-13 05:25:41.591369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.324 [2024-07-13 05:25:41.600419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.324 [2024-07-13 05:25:41.600899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.324 [2024-07-13 05:25:41.600941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.324 [2024-07-13 05:25:41.600968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.324 [2024-07-13 05:25:41.601262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.324 [2024-07-13 05:25:41.601553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.324 [2024-07-13 05:25:41.601584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.324 [2024-07-13 05:25:41.601606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.324 [2024-07-13 05:25:41.605772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.324 [2024-07-13 05:25:41.615102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.324 [2024-07-13 05:25:41.615555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.324 [2024-07-13 05:25:41.615595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.324 [2024-07-13 05:25:41.615620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.324 [2024-07-13 05:25:41.615914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.324 [2024-07-13 05:25:41.616198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.324 [2024-07-13 05:25:41.616229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.324 [2024-07-13 05:25:41.616250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.324 [2024-07-13 05:25:41.620266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.324 [2024-07-13 05:25:41.629189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.324 [2024-07-13 05:25:41.629609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.324 [2024-07-13 05:25:41.629645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.325 [2024-07-13 05:25:41.629669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.325 [2024-07-13 05:25:41.629965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.325 [2024-07-13 05:25:41.630234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.325 [2024-07-13 05:25:41.630261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.325 [2024-07-13 05:25:41.630280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.325 [2024-07-13 05:25:41.633828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.325 [2024-07-13 05:25:41.643817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.325 [2024-07-13 05:25:41.644342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.325 [2024-07-13 05:25:41.644380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.325 [2024-07-13 05:25:41.644403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.325 [2024-07-13 05:25:41.644706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.325 [2024-07-13 05:25:41.645014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.325 [2024-07-13 05:25:41.645042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.325 [2024-07-13 05:25:41.645077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.325 [2024-07-13 05:25:41.649263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.325 [2024-07-13 05:25:41.658465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.325 [2024-07-13 05:25:41.658972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.325 [2024-07-13 05:25:41.659009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.325 [2024-07-13 05:25:41.659033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.325 [2024-07-13 05:25:41.659342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.325 [2024-07-13 05:25:41.659634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.325 [2024-07-13 05:25:41.659665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.325 [2024-07-13 05:25:41.659687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.325 [2024-07-13 05:25:41.663919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.325 [2024-07-13 05:25:41.673068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.325 [2024-07-13 05:25:41.673564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.325 [2024-07-13 05:25:41.673615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.325 [2024-07-13 05:25:41.673640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.325 [2024-07-13 05:25:41.673972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.325 [2024-07-13 05:25:41.674269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.325 [2024-07-13 05:25:41.674301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.325 [2024-07-13 05:25:41.674329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.325 [2024-07-13 05:25:41.678614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.325 [2024-07-13 05:25:41.687618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.325 [2024-07-13 05:25:41.688074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.325 [2024-07-13 05:25:41.688115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.325 [2024-07-13 05:25:41.688141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.325 [2024-07-13 05:25:41.688433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.325 [2024-07-13 05:25:41.688724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.325 [2024-07-13 05:25:41.688756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.325 [2024-07-13 05:25:41.688779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.325 [2024-07-13 05:25:41.692962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.325 [2024-07-13 05:25:41.702104] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.325 [2024-07-13 05:25:41.702665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.325 [2024-07-13 05:25:41.702724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.325 [2024-07-13 05:25:41.702749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.325 [2024-07-13 05:25:41.703050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.325 [2024-07-13 05:25:41.703356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.325 [2024-07-13 05:25:41.703388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.325 [2024-07-13 05:25:41.703411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.325 [2024-07-13 05:25:41.707602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.325 [2024-07-13 05:25:41.716779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.325 [2024-07-13 05:25:41.717262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.325 [2024-07-13 05:25:41.717312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.325 [2024-07-13 05:25:41.717338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.325 [2024-07-13 05:25:41.717626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.325 [2024-07-13 05:25:41.717928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.325 [2024-07-13 05:25:41.717960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.325 [2024-07-13 05:25:41.717983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.325 [2024-07-13 05:25:41.722216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.325 [2024-07-13 05:25:41.731318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.325 [2024-07-13 05:25:41.731779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.325 [2024-07-13 05:25:41.731826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.325 [2024-07-13 05:25:41.731851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.325 [2024-07-13 05:25:41.732156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.325 [2024-07-13 05:25:41.732454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.325 [2024-07-13 05:25:41.732486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.325 [2024-07-13 05:25:41.732508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.325 [2024-07-13 05:25:41.736703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.325 [2024-07-13 05:25:41.745822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.325 [2024-07-13 05:25:41.746308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.325 [2024-07-13 05:25:41.746349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.325 [2024-07-13 05:25:41.746390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.325 [2024-07-13 05:25:41.746681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.325 [2024-07-13 05:25:41.746984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.325 [2024-07-13 05:25:41.747015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.325 [2024-07-13 05:25:41.747038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.325 [2024-07-13 05:25:41.751230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.325 [2024-07-13 05:25:41.760377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.325 [2024-07-13 05:25:41.760895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.325 [2024-07-13 05:25:41.760937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.325 [2024-07-13 05:25:41.760963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.325 [2024-07-13 05:25:41.761250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.325 [2024-07-13 05:25:41.761539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.325 [2024-07-13 05:25:41.761578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.325 [2024-07-13 05:25:41.761600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.325 [2024-07-13 05:25:41.765764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.325 [2024-07-13 05:25:41.774882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.325 [2024-07-13 05:25:41.775343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.325 [2024-07-13 05:25:41.775392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.325 [2024-07-13 05:25:41.775419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.325 [2024-07-13 05:25:41.775714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.325 [2024-07-13 05:25:41.776015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.325 [2024-07-13 05:25:41.776047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.325 [2024-07-13 05:25:41.776070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.325 [2024-07-13 05:25:41.780268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.325 [2024-07-13 05:25:41.789345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.326 [2024-07-13 05:25:41.789794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.326 [2024-07-13 05:25:41.789835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.326 [2024-07-13 05:25:41.789879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.326 [2024-07-13 05:25:41.790169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.326 [2024-07-13 05:25:41.790459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.326 [2024-07-13 05:25:41.790490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.326 [2024-07-13 05:25:41.790513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.326 [2024-07-13 05:25:41.794677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.326 [2024-07-13 05:25:41.804051] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.326 [2024-07-13 05:25:41.804520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.326 [2024-07-13 05:25:41.804554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.326 [2024-07-13 05:25:41.804593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.326 [2024-07-13 05:25:41.804892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.326 [2024-07-13 05:25:41.805184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.326 [2024-07-13 05:25:41.805215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.326 [2024-07-13 05:25:41.805237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.326 [2024-07-13 05:25:41.809402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.326 [2024-07-13 05:25:41.818747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.326 [2024-07-13 05:25:41.819219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.326 [2024-07-13 05:25:41.819280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.326 [2024-07-13 05:25:41.819304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.326 [2024-07-13 05:25:41.819616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.326 [2024-07-13 05:25:41.819919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.326 [2024-07-13 05:25:41.819956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.326 [2024-07-13 05:25:41.819980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.585 [2024-07-13 05:25:41.824289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.585 [2024-07-13 05:25:41.833421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.585 [2024-07-13 05:25:41.833939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.585 [2024-07-13 05:25:41.833982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.585 [2024-07-13 05:25:41.834009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.585 [2024-07-13 05:25:41.834298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.585 [2024-07-13 05:25:41.834588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.585 [2024-07-13 05:25:41.834620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.585 [2024-07-13 05:25:41.834643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.585 [2024-07-13 05:25:41.838808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.585 [2024-07-13 05:25:41.848111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.585 [2024-07-13 05:25:41.848574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.585 [2024-07-13 05:25:41.848614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.585 [2024-07-13 05:25:41.848641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.585 [2024-07-13 05:25:41.848939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.585 [2024-07-13 05:25:41.849228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.585 [2024-07-13 05:25:41.849259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.585 [2024-07-13 05:25:41.849282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.585 [2024-07-13 05:25:41.853440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.585 [2024-07-13 05:25:41.862689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.585 [2024-07-13 05:25:41.863178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.585 [2024-07-13 05:25:41.863218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.585 [2024-07-13 05:25:41.863244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.585 [2024-07-13 05:25:41.863530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.585 [2024-07-13 05:25:41.863818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.585 [2024-07-13 05:25:41.863850] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.585 [2024-07-13 05:25:41.863883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.585 [2024-07-13 05:25:41.868026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.585 [2024-07-13 05:25:41.877311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.585 [2024-07-13 05:25:41.877807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.585 [2024-07-13 05:25:41.877848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.585 [2024-07-13 05:25:41.877886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.585 [2024-07-13 05:25:41.878176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.585 [2024-07-13 05:25:41.878464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.585 [2024-07-13 05:25:41.878495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.585 [2024-07-13 05:25:41.878517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.585 [2024-07-13 05:25:41.882656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.585 [2024-07-13 05:25:41.891933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.585 [2024-07-13 05:25:41.892428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.585 [2024-07-13 05:25:41.892469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.585 [2024-07-13 05:25:41.892495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.585 [2024-07-13 05:25:41.892782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.585 [2024-07-13 05:25:41.893084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.585 [2024-07-13 05:25:41.893116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.585 [2024-07-13 05:25:41.893138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.585 [2024-07-13 05:25:41.897269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.585 [2024-07-13 05:25:41.906531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.585 [2024-07-13 05:25:41.907033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.585 [2024-07-13 05:25:41.907073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.585 [2024-07-13 05:25:41.907099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.585 [2024-07-13 05:25:41.907385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.585 [2024-07-13 05:25:41.907675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.585 [2024-07-13 05:25:41.907706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.585 [2024-07-13 05:25:41.907729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.585 [2024-07-13 05:25:41.911861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.585 [2024-07-13 05:25:41.921119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.585 [2024-07-13 05:25:41.921610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.585 [2024-07-13 05:25:41.921644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.585 [2024-07-13 05:25:41.921666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.585 [2024-07-13 05:25:41.921966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.585 [2024-07-13 05:25:41.922256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.585 [2024-07-13 05:25:41.922288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.585 [2024-07-13 05:25:41.922310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.585 [2024-07-13 05:25:41.926443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.585 [2024-07-13 05:25:41.935678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.586 [2024-07-13 05:25:41.936160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.586 [2024-07-13 05:25:41.936201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.586 [2024-07-13 05:25:41.936226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.586 [2024-07-13 05:25:41.936512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.586 [2024-07-13 05:25:41.936800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.586 [2024-07-13 05:25:41.936831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.586 [2024-07-13 05:25:41.936853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.586 [2024-07-13 05:25:41.941007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.586 [2024-07-13 05:25:41.950237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.586 [2024-07-13 05:25:41.950731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.586 [2024-07-13 05:25:41.950771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.586 [2024-07-13 05:25:41.950797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.586 [2024-07-13 05:25:41.951097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.586 [2024-07-13 05:25:41.951399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.586 [2024-07-13 05:25:41.951430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.586 [2024-07-13 05:25:41.951453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.586 [2024-07-13 05:25:41.955573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.586 [2024-07-13 05:25:41.964812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.586 [2024-07-13 05:25:41.965298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.586 [2024-07-13 05:25:41.965339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.586 [2024-07-13 05:25:41.965365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.586 [2024-07-13 05:25:41.965653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.586 [2024-07-13 05:25:41.965952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.586 [2024-07-13 05:25:41.965990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.586 [2024-07-13 05:25:41.966013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.586 [2024-07-13 05:25:41.970161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.586 [2024-07-13 05:25:41.979404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.586 [2024-07-13 05:25:41.979900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.586 [2024-07-13 05:25:41.979941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.586 [2024-07-13 05:25:41.979967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.586 [2024-07-13 05:25:41.980254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.586 [2024-07-13 05:25:41.980541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.586 [2024-07-13 05:25:41.980572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.586 [2024-07-13 05:25:41.980595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.586 [2024-07-13 05:25:41.984728] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.586 [2024-07-13 05:25:41.993974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.586 [2024-07-13 05:25:41.994482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.586 [2024-07-13 05:25:41.994533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.586 [2024-07-13 05:25:41.994557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.586 [2024-07-13 05:25:41.994860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.586 [2024-07-13 05:25:41.995166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.586 [2024-07-13 05:25:41.995197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.586 [2024-07-13 05:25:41.995227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.586 [2024-07-13 05:25:41.999362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.586 [2024-07-13 05:25:42.008594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.586 [2024-07-13 05:25:42.009099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.586 [2024-07-13 05:25:42.009139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.586 [2024-07-13 05:25:42.009175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.586 [2024-07-13 05:25:42.009461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.586 [2024-07-13 05:25:42.009750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.586 [2024-07-13 05:25:42.009781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.586 [2024-07-13 05:25:42.009804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.586 [2024-07-13 05:25:42.013939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.586 [2024-07-13 05:25:42.023192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.586 [2024-07-13 05:25:42.023680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.586 [2024-07-13 05:25:42.023721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.586 [2024-07-13 05:25:42.023757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.586 [2024-07-13 05:25:42.024057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.586 [2024-07-13 05:25:42.024346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.586 [2024-07-13 05:25:42.024378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.586 [2024-07-13 05:25:42.024400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.586 [2024-07-13 05:25:42.028546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.586 [2024-07-13 05:25:42.037778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.586 [2024-07-13 05:25:42.038289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.586 [2024-07-13 05:25:42.038331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.586 [2024-07-13 05:25:42.038357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.586 [2024-07-13 05:25:42.038643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.586 [2024-07-13 05:25:42.038944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.586 [2024-07-13 05:25:42.038976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.586 [2024-07-13 05:25:42.038999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.586 [2024-07-13 05:25:42.043120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.586 [2024-07-13 05:25:42.052351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.586 [2024-07-13 05:25:42.052811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.586 [2024-07-13 05:25:42.052858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.586 [2024-07-13 05:25:42.052896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.586 [2024-07-13 05:25:42.053184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.586 [2024-07-13 05:25:42.053474] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.586 [2024-07-13 05:25:42.053505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.586 [2024-07-13 05:25:42.053527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.586 [2024-07-13 05:25:42.057652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.586 [2024-07-13 05:25:42.066886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.586 [2024-07-13 05:25:42.067397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.586 [2024-07-13 05:25:42.067437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.586 [2024-07-13 05:25:42.067469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.586 [2024-07-13 05:25:42.067758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.586 [2024-07-13 05:25:42.068060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.586 [2024-07-13 05:25:42.068092] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.586 [2024-07-13 05:25:42.068114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.586 [2024-07-13 05:25:42.072243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.586 [2024-07-13 05:25:42.081592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.586 [2024-07-13 05:25:42.082045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.586 [2024-07-13 05:25:42.082092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.586 [2024-07-13 05:25:42.082131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.586 [2024-07-13 05:25:42.082419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.586 [2024-07-13 05:25:42.082709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.586 [2024-07-13 05:25:42.082740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.586 [2024-07-13 05:25:42.082763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.846 [2024-07-13 05:25:42.086980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.846 [2024-07-13 05:25:42.096067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.846 [2024-07-13 05:25:42.096566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.846 [2024-07-13 05:25:42.096619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.846 [2024-07-13 05:25:42.096644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.846 [2024-07-13 05:25:42.096964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.846 [2024-07-13 05:25:42.097252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.846 [2024-07-13 05:25:42.097284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.846 [2024-07-13 05:25:42.097307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.846 [2024-07-13 05:25:42.101432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.846 [2024-07-13 05:25:42.110670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.846 [2024-07-13 05:25:42.111194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.846 [2024-07-13 05:25:42.111245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.846 [2024-07-13 05:25:42.111269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.846 [2024-07-13 05:25:42.111573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.846 [2024-07-13 05:25:42.111863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.846 [2024-07-13 05:25:42.111919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.846 [2024-07-13 05:25:42.111943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.846 [2024-07-13 05:25:42.116078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.846 [2024-07-13 05:25:42.125318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.846 [2024-07-13 05:25:42.125805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.846 [2024-07-13 05:25:42.125845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.846 [2024-07-13 05:25:42.125881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.846 [2024-07-13 05:25:42.126170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.846 [2024-07-13 05:25:42.126458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.846 [2024-07-13 05:25:42.126490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.846 [2024-07-13 05:25:42.126512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.846 [2024-07-13 05:25:42.130644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.846 [2024-07-13 05:25:42.139887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.846 [2024-07-13 05:25:42.140368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.846 [2024-07-13 05:25:42.140410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.846 [2024-07-13 05:25:42.140436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.846 [2024-07-13 05:25:42.140722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.846 [2024-07-13 05:25:42.141024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.846 [2024-07-13 05:25:42.141056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.846 [2024-07-13 05:25:42.141079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.846 [2024-07-13 05:25:42.145205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.846 [2024-07-13 05:25:42.154459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.846 [2024-07-13 05:25:42.154942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.846 [2024-07-13 05:25:42.154983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.846 [2024-07-13 05:25:42.155008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.846 [2024-07-13 05:25:42.155293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.846 [2024-07-13 05:25:42.155580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.846 [2024-07-13 05:25:42.155612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.846 [2024-07-13 05:25:42.155649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.846 [2024-07-13 05:25:42.159791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.846 [2024-07-13 05:25:42.169062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.847 [2024-07-13 05:25:42.169548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.847 [2024-07-13 05:25:42.169588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.847 [2024-07-13 05:25:42.169614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.847 [2024-07-13 05:25:42.169913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.847 [2024-07-13 05:25:42.170201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.847 [2024-07-13 05:25:42.170232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.847 [2024-07-13 05:25:42.170255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.847 [2024-07-13 05:25:42.174385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.847 [2024-07-13 05:25:42.183614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.847 [2024-07-13 05:25:42.184128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.847 [2024-07-13 05:25:42.184177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.847 [2024-07-13 05:25:42.184201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.847 [2024-07-13 05:25:42.184505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.847 [2024-07-13 05:25:42.184795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.847 [2024-07-13 05:25:42.184826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.847 [2024-07-13 05:25:42.184848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.847 [2024-07-13 05:25:42.188991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.847 [2024-07-13 05:25:42.198249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.847 [2024-07-13 05:25:42.198737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.847 [2024-07-13 05:25:42.198778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:35.847 [2024-07-13 05:25:42.198804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:35.847 [2024-07-13 05:25:42.199102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:35.847 [2024-07-13 05:25:42.199396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.847 [2024-07-13 05:25:42.199428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.847 [2024-07-13 05:25:42.199451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.847 [2024-07-13 05:25:42.203575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.847 [2024-07-13 05:25:42.212825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:35.847 [2024-07-13 05:25:42.213331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.847 [2024-07-13 05:25:42.213371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:35.847 [2024-07-13 05:25:42.213404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:35.847 [2024-07-13 05:25:42.213692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:35.847 [2024-07-13 05:25:42.213996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:35.847 [2024-07-13 05:25:42.214028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:35.847 [2024-07-13 05:25:42.214051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:35.847 [2024-07-13 05:25:42.218179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:35.847 [2024-07-13 05:25:42.227409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:35.847 [2024-07-13 05:25:42.227890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.847 [2024-07-13 05:25:42.227946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:35.847 [2024-07-13 05:25:42.227970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:35.847 [2024-07-13 05:25:42.228263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:35.847 [2024-07-13 05:25:42.228553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:35.847 [2024-07-13 05:25:42.228585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:35.847 [2024-07-13 05:25:42.228608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:35.847 [2024-07-13 05:25:42.232738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:35.847 [2024-07-13 05:25:42.241993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:35.847 [2024-07-13 05:25:42.242489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.847 [2024-07-13 05:25:42.242529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:35.847 [2024-07-13 05:25:42.242554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:35.847 [2024-07-13 05:25:42.242841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:35.847 [2024-07-13 05:25:42.243136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:35.847 [2024-07-13 05:25:42.243168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:35.847 [2024-07-13 05:25:42.243191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:35.847 [2024-07-13 05:25:42.247333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:35.847 [2024-07-13 05:25:42.256590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:35.847 [2024-07-13 05:25:42.257092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.847 [2024-07-13 05:25:42.257132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:35.847 [2024-07-13 05:25:42.257158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:35.847 [2024-07-13 05:25:42.257443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:35.847 [2024-07-13 05:25:42.257731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:35.847 [2024-07-13 05:25:42.257771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:35.847 [2024-07-13 05:25:42.257794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:35.847 [2024-07-13 05:25:42.261937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:35.847 [2024-07-13 05:25:42.271194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:35.847 [2024-07-13 05:25:42.271686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.847 [2024-07-13 05:25:42.271725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:35.847 [2024-07-13 05:25:42.271751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:35.847 [2024-07-13 05:25:42.272050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:35.847 [2024-07-13 05:25:42.272337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:35.847 [2024-07-13 05:25:42.272368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:35.847 [2024-07-13 05:25:42.272391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:35.847 [2024-07-13 05:25:42.276508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:35.847 [2024-07-13 05:25:42.285760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:35.847 [2024-07-13 05:25:42.286242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.847 [2024-07-13 05:25:42.286283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:35.847 [2024-07-13 05:25:42.286308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:35.847 [2024-07-13 05:25:42.286595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:35.847 [2024-07-13 05:25:42.286896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:35.847 [2024-07-13 05:25:42.286929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:35.847 [2024-07-13 05:25:42.286951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:35.847 [2024-07-13 05:25:42.291092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:35.847 [2024-07-13 05:25:42.300337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:35.847 [2024-07-13 05:25:42.300831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.847 [2024-07-13 05:25:42.300879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:35.847 [2024-07-13 05:25:42.300907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:35.847 [2024-07-13 05:25:42.301194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:35.847 [2024-07-13 05:25:42.301483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:35.847 [2024-07-13 05:25:42.301515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:35.847 [2024-07-13 05:25:42.301538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:35.847 [2024-07-13 05:25:42.305668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:35.847 [2024-07-13 05:25:42.314933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:35.847 [2024-07-13 05:25:42.315414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.847 [2024-07-13 05:25:42.315455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:35.847 [2024-07-13 05:25:42.315481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:35.847 [2024-07-13 05:25:42.315769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:35.847 [2024-07-13 05:25:42.316069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:35.847 [2024-07-13 05:25:42.316101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:35.847 [2024-07-13 05:25:42.316123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:35.848 [2024-07-13 05:25:42.320249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:35.848 [2024-07-13 05:25:42.329496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:35.848 [2024-07-13 05:25:42.329979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.848 [2024-07-13 05:25:42.330020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:35.848 [2024-07-13 05:25:42.330046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:35.848 [2024-07-13 05:25:42.330333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:35.848 [2024-07-13 05:25:42.330620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:35.848 [2024-07-13 05:25:42.330651] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:35.848 [2024-07-13 05:25:42.330674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:35.848 [2024-07-13 05:25:42.334801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:35.848 [2024-07-13 05:25:42.344232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.107 [2024-07-13 05:25:42.344764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.107 [2024-07-13 05:25:42.344807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.107 [2024-07-13 05:25:42.344834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.107 [2024-07-13 05:25:42.345133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.107 [2024-07-13 05:25:42.345436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.107 [2024-07-13 05:25:42.345470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.107 [2024-07-13 05:25:42.345492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.107 [2024-07-13 05:25:42.349719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.107 [2024-07-13 05:25:42.358737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.107 [2024-07-13 05:25:42.359244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.107 [2024-07-13 05:25:42.359284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.107 [2024-07-13 05:25:42.359317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.107 [2024-07-13 05:25:42.359606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.107 [2024-07-13 05:25:42.359906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.107 [2024-07-13 05:25:42.359937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.107 [2024-07-13 05:25:42.359960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.107 [2024-07-13 05:25:42.364088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.107 [2024-07-13 05:25:42.373343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.107 [2024-07-13 05:25:42.373910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.107 [2024-07-13 05:25:42.373952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.107 [2024-07-13 05:25:42.373978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.107 [2024-07-13 05:25:42.374266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.107 [2024-07-13 05:25:42.374555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.107 [2024-07-13 05:25:42.374585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.107 [2024-07-13 05:25:42.374607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.107 [2024-07-13 05:25:42.378731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.107 [2024-07-13 05:25:42.387975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.107 [2024-07-13 05:25:42.388430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.107 [2024-07-13 05:25:42.388470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.107 [2024-07-13 05:25:42.388495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.107 [2024-07-13 05:25:42.388781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.107 [2024-07-13 05:25:42.389081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.107 [2024-07-13 05:25:42.389113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.107 [2024-07-13 05:25:42.389135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.107 [2024-07-13 05:25:42.393258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.107 [2024-07-13 05:25:42.402492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.107 [2024-07-13 05:25:42.402955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.107 [2024-07-13 05:25:42.402996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.107 [2024-07-13 05:25:42.403022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.107 [2024-07-13 05:25:42.403309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.107 [2024-07-13 05:25:42.403604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.107 [2024-07-13 05:25:42.403636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.107 [2024-07-13 05:25:42.403658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.107 [2024-07-13 05:25:42.407783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.107 [2024-07-13 05:25:42.417034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.107 [2024-07-13 05:25:42.417527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.107 [2024-07-13 05:25:42.417567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.108 [2024-07-13 05:25:42.417593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.108 [2024-07-13 05:25:42.417890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.108 [2024-07-13 05:25:42.418179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.108 [2024-07-13 05:25:42.418210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.108 [2024-07-13 05:25:42.418233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.108 [2024-07-13 05:25:42.422372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.108 [2024-07-13 05:25:42.431590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.108 [2024-07-13 05:25:42.432083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.108 [2024-07-13 05:25:42.432123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.108 [2024-07-13 05:25:42.432149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.108 [2024-07-13 05:25:42.432435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.108 [2024-07-13 05:25:42.432723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.108 [2024-07-13 05:25:42.432754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.108 [2024-07-13 05:25:42.432776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.108 [2024-07-13 05:25:42.436906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.108 [2024-07-13 05:25:42.446158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.108 [2024-07-13 05:25:42.446641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.108 [2024-07-13 05:25:42.446681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.108 [2024-07-13 05:25:42.446708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.108 [2024-07-13 05:25:42.447008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.108 [2024-07-13 05:25:42.447296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.108 [2024-07-13 05:25:42.447327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.108 [2024-07-13 05:25:42.447349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.108 [2024-07-13 05:25:42.451475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.108 [2024-07-13 05:25:42.460702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.108 [2024-07-13 05:25:42.461213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.108 [2024-07-13 05:25:42.461254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.108 [2024-07-13 05:25:42.461280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.108 [2024-07-13 05:25:42.461567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.108 [2024-07-13 05:25:42.461855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.108 [2024-07-13 05:25:42.461897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.108 [2024-07-13 05:25:42.461920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.108 [2024-07-13 05:25:42.466055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.108 [2024-07-13 05:25:42.475307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.108 [2024-07-13 05:25:42.475798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.108 [2024-07-13 05:25:42.475839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.108 [2024-07-13 05:25:42.475875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.108 [2024-07-13 05:25:42.476166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.108 [2024-07-13 05:25:42.476456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.108 [2024-07-13 05:25:42.476487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.108 [2024-07-13 05:25:42.476509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.108 [2024-07-13 05:25:42.480644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.108 [2024-07-13 05:25:42.489879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.108 [2024-07-13 05:25:42.490369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.108 [2024-07-13 05:25:42.490410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.108 [2024-07-13 05:25:42.490435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.108 [2024-07-13 05:25:42.490721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.108 [2024-07-13 05:25:42.491022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.108 [2024-07-13 05:25:42.491053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.108 [2024-07-13 05:25:42.491076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.108 [2024-07-13 05:25:42.495199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.108 [2024-07-13 05:25:42.504441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.108 [2024-07-13 05:25:42.504933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.108 [2024-07-13 05:25:42.504973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.108 [2024-07-13 05:25:42.505005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.108 [2024-07-13 05:25:42.505294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.108 [2024-07-13 05:25:42.505583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.108 [2024-07-13 05:25:42.505614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.108 [2024-07-13 05:25:42.505637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.108 [2024-07-13 05:25:42.509768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.108 [2024-07-13 05:25:42.519023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.108 [2024-07-13 05:25:42.519490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.108 [2024-07-13 05:25:42.519532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.108 [2024-07-13 05:25:42.519559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.108 [2024-07-13 05:25:42.519847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.108 [2024-07-13 05:25:42.520155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.108 [2024-07-13 05:25:42.520188] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.108 [2024-07-13 05:25:42.520210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.108 [2024-07-13 05:25:42.524343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.108 [2024-07-13 05:25:42.533573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.108 [2024-07-13 05:25:42.534100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.108 [2024-07-13 05:25:42.534142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.108 [2024-07-13 05:25:42.534168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.108 [2024-07-13 05:25:42.534453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.108 [2024-07-13 05:25:42.534742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.108 [2024-07-13 05:25:42.534773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.108 [2024-07-13 05:25:42.534795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.108 [2024-07-13 05:25:42.538942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.108 [2024-07-13 05:25:42.548195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.108 [2024-07-13 05:25:42.548791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.108 [2024-07-13 05:25:42.548861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.108 [2024-07-13 05:25:42.548899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.108 [2024-07-13 05:25:42.549188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.108 [2024-07-13 05:25:42.549482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.108 [2024-07-13 05:25:42.549514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.108 [2024-07-13 05:25:42.549536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.108 [2024-07-13 05:25:42.553664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.108 [2024-07-13 05:25:42.562670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.108 [2024-07-13 05:25:42.563173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.108 [2024-07-13 05:25:42.563213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.108 [2024-07-13 05:25:42.563239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.108 [2024-07-13 05:25:42.563525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.108 [2024-07-13 05:25:42.563812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.108 [2024-07-13 05:25:42.563843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.108 [2024-07-13 05:25:42.563874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.108 [2024-07-13 05:25:42.568013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.108 [2024-07-13 05:25:42.577263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.108 [2024-07-13 05:25:42.577765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.109 [2024-07-13 05:25:42.577841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.109 [2024-07-13 05:25:42.577879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.109 [2024-07-13 05:25:42.578170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.109 [2024-07-13 05:25:42.578460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.109 [2024-07-13 05:25:42.578491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.109 [2024-07-13 05:25:42.578513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.109 [2024-07-13 05:25:42.582643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.109 [2024-07-13 05:25:42.591888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.109 [2024-07-13 05:25:42.592330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.109 [2024-07-13 05:25:42.592371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.109 [2024-07-13 05:25:42.592397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.109 [2024-07-13 05:25:42.592685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.109 [2024-07-13 05:25:42.592989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.109 [2024-07-13 05:25:42.593021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.109 [2024-07-13 05:25:42.593043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.109 [2024-07-13 05:25:42.597180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.368 [2024-07-13 05:25:42.606654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.368 [2024-07-13 05:25:42.607183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.368 [2024-07-13 05:25:42.607225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.368 [2024-07-13 05:25:42.607252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.368 [2024-07-13 05:25:42.607538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.368 [2024-07-13 05:25:42.607826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.368 [2024-07-13 05:25:42.607859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.368 [2024-07-13 05:25:42.607895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.368 [2024-07-13 05:25:42.612143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.368 [2024-07-13 05:25:42.621141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.368 [2024-07-13 05:25:42.621626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.368 [2024-07-13 05:25:42.621667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.368 [2024-07-13 05:25:42.621694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.368 [2024-07-13 05:25:42.621993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.368 [2024-07-13 05:25:42.622282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.368 [2024-07-13 05:25:42.622313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.368 [2024-07-13 05:25:42.622335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.368 [2024-07-13 05:25:42.626461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.368 [2024-07-13 05:25:42.635695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.368 [2024-07-13 05:25:42.636188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.368 [2024-07-13 05:25:42.636229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.368 [2024-07-13 05:25:42.636255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.368 [2024-07-13 05:25:42.636541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.368 [2024-07-13 05:25:42.636837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.368 [2024-07-13 05:25:42.636877] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.368 [2024-07-13 05:25:42.636903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.368 [2024-07-13 05:25:42.641036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.369 [2024-07-13 05:25:42.650289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.369 [2024-07-13 05:25:42.650897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.369 [2024-07-13 05:25:42.650937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.369 [2024-07-13 05:25:42.650969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.369 [2024-07-13 05:25:42.651257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.369 [2024-07-13 05:25:42.651545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.369 [2024-07-13 05:25:42.651576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.369 [2024-07-13 05:25:42.651598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.369 [2024-07-13 05:25:42.655719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.369 [2024-07-13 05:25:42.664724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.369 [2024-07-13 05:25:42.665235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.369 [2024-07-13 05:25:42.665276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.369 [2024-07-13 05:25:42.665302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.369 [2024-07-13 05:25:42.665587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.369 [2024-07-13 05:25:42.665887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.369 [2024-07-13 05:25:42.665919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.369 [2024-07-13 05:25:42.665941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.369 [2024-07-13 05:25:42.670080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.369 [2024-07-13 05:25:42.679324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.369 [2024-07-13 05:25:42.679913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.369 [2024-07-13 05:25:42.679953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.369 [2024-07-13 05:25:42.679979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.369 [2024-07-13 05:25:42.680266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.369 [2024-07-13 05:25:42.680554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.369 [2024-07-13 05:25:42.680585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.369 [2024-07-13 05:25:42.680608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.369 [2024-07-13 05:25:42.684886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.369 [2024-07-13 05:25:42.693876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.369 [2024-07-13 05:25:42.694381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.369 [2024-07-13 05:25:42.694422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.369 [2024-07-13 05:25:42.694449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.369 [2024-07-13 05:25:42.694735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.369 [2024-07-13 05:25:42.695041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.369 [2024-07-13 05:25:42.695072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.369 [2024-07-13 05:25:42.695095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.369 [2024-07-13 05:25:42.699221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.369 [2024-07-13 05:25:42.708452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.369 [2024-07-13 05:25:42.708928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.369 [2024-07-13 05:25:42.708969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.369 [2024-07-13 05:25:42.708996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.369 [2024-07-13 05:25:42.709282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.369 [2024-07-13 05:25:42.709571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.369 [2024-07-13 05:25:42.709602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.369 [2024-07-13 05:25:42.709624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.369 [2024-07-13 05:25:42.713747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.369 [2024-07-13 05:25:42.722978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.369 [2024-07-13 05:25:42.723453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.369 [2024-07-13 05:25:42.723494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.369 [2024-07-13 05:25:42.723520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.369 [2024-07-13 05:25:42.723805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.369 [2024-07-13 05:25:42.724103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.369 [2024-07-13 05:25:42.724134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.369 [2024-07-13 05:25:42.724156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.369 [2024-07-13 05:25:42.728280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.369 [2024-07-13 05:25:42.737501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.369 [2024-07-13 05:25:42.737995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.369 [2024-07-13 05:25:42.738035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.369 [2024-07-13 05:25:42.738062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.369 [2024-07-13 05:25:42.738347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.369 [2024-07-13 05:25:42.738636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.369 [2024-07-13 05:25:42.738667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.369 [2024-07-13 05:25:42.738689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.369 [2024-07-13 05:25:42.742832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.369 [2024-07-13 05:25:42.752079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.369 [2024-07-13 05:25:42.752557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.369 [2024-07-13 05:25:42.752596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.369 [2024-07-13 05:25:42.752622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.369 [2024-07-13 05:25:42.752918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.369 [2024-07-13 05:25:42.753205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.369 [2024-07-13 05:25:42.753236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.369 [2024-07-13 05:25:42.753259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.369 [2024-07-13 05:25:42.757399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.369 [2024-07-13 05:25:42.766661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.369 [2024-07-13 05:25:42.767177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.369 [2024-07-13 05:25:42.767218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.369 [2024-07-13 05:25:42.767245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.369 [2024-07-13 05:25:42.767530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.369 [2024-07-13 05:25:42.767818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.369 [2024-07-13 05:25:42.767849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.369 [2024-07-13 05:25:42.767882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.369 [2024-07-13 05:25:42.772018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.369 [2024-07-13 05:25:42.781283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.369 [2024-07-13 05:25:42.781765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.369 [2024-07-13 05:25:42.781805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.369 [2024-07-13 05:25:42.781831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.369 [2024-07-13 05:25:42.782143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.369 [2024-07-13 05:25:42.782433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.369 [2024-07-13 05:25:42.782464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.369 [2024-07-13 05:25:42.782486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.369 [2024-07-13 05:25:42.786612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.369 [2024-07-13 05:25:42.795855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.369 [2024-07-13 05:25:42.796472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.369 [2024-07-13 05:25:42.796534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.369 [2024-07-13 05:25:42.796562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.369 [2024-07-13 05:25:42.796849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.369 [2024-07-13 05:25:42.797150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.369 [2024-07-13 05:25:42.797182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.369 [2024-07-13 05:25:42.797204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.370 [2024-07-13 05:25:42.801349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.370 [2024-07-13 05:25:42.810418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.370 [2024-07-13 05:25:42.810892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.370 [2024-07-13 05:25:42.810936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.370 [2024-07-13 05:25:42.810962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.370 [2024-07-13 05:25:42.811250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.370 [2024-07-13 05:25:42.811538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.370 [2024-07-13 05:25:42.811570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.370 [2024-07-13 05:25:42.811592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.370 [2024-07-13 05:25:42.815755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.370 [2024-07-13 05:25:42.824978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.370 [2024-07-13 05:25:42.825510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.370 [2024-07-13 05:25:42.825567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.370 [2024-07-13 05:25:42.825594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.370 [2024-07-13 05:25:42.825917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.370 [2024-07-13 05:25:42.826215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.370 [2024-07-13 05:25:42.826247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.370 [2024-07-13 05:25:42.826269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.370 [2024-07-13 05:25:42.830499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.370 [2024-07-13 05:25:42.839634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:36.370 [2024-07-13 05:25:42.840125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.370 [2024-07-13 05:25:42.840166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:36.370 [2024-07-13 05:25:42.840193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:36.370 [2024-07-13 05:25:42.840494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:36.370 [2024-07-13 05:25:42.840798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.370 [2024-07-13 05:25:42.840830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.370 [2024-07-13 05:25:42.840852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.370 [2024-07-13 05:25:42.845113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.370 [2024-07-13 05:25:42.854328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.370 [2024-07-13 05:25:42.854797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.370 [2024-07-13 05:25:42.854837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.370 [2024-07-13 05:25:42.854863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.370 [2024-07-13 05:25:42.855163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.370 [2024-07-13 05:25:42.855454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.370 [2024-07-13 05:25:42.855496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.370 [2024-07-13 05:25:42.855518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.370 [2024-07-13 05:25:42.859747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.628 [2024-07-13 05:25:42.869065] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.628 [2024-07-13 05:25:42.869563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.628 [2024-07-13 05:25:42.869604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.628 [2024-07-13 05:25:42.869630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.628 [2024-07-13 05:25:42.869932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.628 [2024-07-13 05:25:42.870221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.628 [2024-07-13 05:25:42.870253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.628 [2024-07-13 05:25:42.870276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.628 [2024-07-13 05:25:42.874518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.628 [2024-07-13 05:25:42.883608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.628 [2024-07-13 05:25:42.884086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.628 [2024-07-13 05:25:42.884128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.628 [2024-07-13 05:25:42.884154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.628 [2024-07-13 05:25:42.884441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.628 [2024-07-13 05:25:42.884731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.628 [2024-07-13 05:25:42.884762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.628 [2024-07-13 05:25:42.884784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.628 [2024-07-13 05:25:42.888956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.628 [2024-07-13 05:25:42.898315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.628 [2024-07-13 05:25:42.898814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.628 [2024-07-13 05:25:42.898856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.628 [2024-07-13 05:25:42.898893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.628 [2024-07-13 05:25:42.899185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.628 [2024-07-13 05:25:42.899476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.628 [2024-07-13 05:25:42.899508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.628 [2024-07-13 05:25:42.899530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.628 [2024-07-13 05:25:42.903698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.628 [2024-07-13 05:25:42.912782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.628 [2024-07-13 05:25:42.913345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.628 [2024-07-13 05:25:42.913403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.628 [2024-07-13 05:25:42.913429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.628 [2024-07-13 05:25:42.913715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.628 [2024-07-13 05:25:42.914016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.628 [2024-07-13 05:25:42.914048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.628 [2024-07-13 05:25:42.914070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.628 [2024-07-13 05:25:42.918221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.628 [2024-07-13 05:25:42.927288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.628 [2024-07-13 05:25:42.927933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.628 [2024-07-13 05:25:42.927974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.628 [2024-07-13 05:25:42.928000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.628 [2024-07-13 05:25:42.928287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.628 [2024-07-13 05:25:42.928577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.628 [2024-07-13 05:25:42.928608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.628 [2024-07-13 05:25:42.928630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.628 [2024-07-13 05:25:42.932794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.628 [2024-07-13 05:25:42.941828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.628 [2024-07-13 05:25:42.942302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.628 [2024-07-13 05:25:42.942347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.628 [2024-07-13 05:25:42.942374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.628 [2024-07-13 05:25:42.942661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.628 [2024-07-13 05:25:42.942974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.628 [2024-07-13 05:25:42.943016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.628 [2024-07-13 05:25:42.943038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.628 [2024-07-13 05:25:42.947180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.628 [2024-07-13 05:25:42.956512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.628 [2024-07-13 05:25:42.957026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.628 [2024-07-13 05:25:42.957071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.628 [2024-07-13 05:25:42.957097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.628 [2024-07-13 05:25:42.957386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.628 [2024-07-13 05:25:42.957676] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.628 [2024-07-13 05:25:42.957707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.628 [2024-07-13 05:25:42.957730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.628 [2024-07-13 05:25:42.961887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.628 [2024-07-13 05:25:42.971179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.628 [2024-07-13 05:25:42.971738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.628 [2024-07-13 05:25:42.971799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.628 [2024-07-13 05:25:42.971826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.628 [2024-07-13 05:25:42.972126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.628 [2024-07-13 05:25:42.972416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.628 [2024-07-13 05:25:42.972447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.628 [2024-07-13 05:25:42.972469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.628 [2024-07-13 05:25:42.976653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.628 [2024-07-13 05:25:42.985684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.628 [2024-07-13 05:25:42.986183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.628 [2024-07-13 05:25:42.986224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.628 [2024-07-13 05:25:42.986250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.628 [2024-07-13 05:25:42.986537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.628 [2024-07-13 05:25:42.986832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.628 [2024-07-13 05:25:42.986886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.628 [2024-07-13 05:25:42.986911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.628 [2024-07-13 05:25:42.991075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.628 [2024-07-13 05:25:43.000123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.628 [2024-07-13 05:25:43.000619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.628 [2024-07-13 05:25:43.000659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.628 [2024-07-13 05:25:43.000685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.628 [2024-07-13 05:25:43.000986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.628 [2024-07-13 05:25:43.001277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.628 [2024-07-13 05:25:43.001309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.628 [2024-07-13 05:25:43.001332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.628 [2024-07-13 05:25:43.005472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.628 [2024-07-13 05:25:43.014735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.628 [2024-07-13 05:25:43.015221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.628 [2024-07-13 05:25:43.015262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.628 [2024-07-13 05:25:43.015289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.628 [2024-07-13 05:25:43.015575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.628 [2024-07-13 05:25:43.015863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.628 [2024-07-13 05:25:43.015905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.628 [2024-07-13 05:25:43.015928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.628 [2024-07-13 05:25:43.020087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.628 [2024-07-13 05:25:43.029369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.628 [2024-07-13 05:25:43.029855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.628 [2024-07-13 05:25:43.029902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.628 [2024-07-13 05:25:43.029928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.628 [2024-07-13 05:25:43.030216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.628 [2024-07-13 05:25:43.030506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.628 [2024-07-13 05:25:43.030537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.628 [2024-07-13 05:25:43.030567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.628 [2024-07-13 05:25:43.034718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.628 [2024-07-13 05:25:43.044025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.628 [2024-07-13 05:25:43.044521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.628 [2024-07-13 05:25:43.044561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.628 [2024-07-13 05:25:43.044587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.628 [2024-07-13 05:25:43.044886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.628 [2024-07-13 05:25:43.045176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.628 [2024-07-13 05:25:43.045207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.628 [2024-07-13 05:25:43.045229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.628 [2024-07-13 05:25:43.049381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.628 [2024-07-13 05:25:43.058691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.628 [2024-07-13 05:25:43.059198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.628 [2024-07-13 05:25:43.059238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.628 [2024-07-13 05:25:43.059264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.628 [2024-07-13 05:25:43.059548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.628 [2024-07-13 05:25:43.059839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.628 [2024-07-13 05:25:43.059879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.628 [2024-07-13 05:25:43.059904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.628 [2024-07-13 05:25:43.064063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.628 [2024-07-13 05:25:43.073311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.628 [2024-07-13 05:25:43.073829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.628 [2024-07-13 05:25:43.073879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.628 [2024-07-13 05:25:43.073907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.628 [2024-07-13 05:25:43.074195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.628 [2024-07-13 05:25:43.074483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.628 [2024-07-13 05:25:43.074514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.628 [2024-07-13 05:25:43.074537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.628 [2024-07-13 05:25:43.078682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.628 [2024-07-13 05:25:43.087724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.628 [2024-07-13 05:25:43.088194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.628 [2024-07-13 05:25:43.088241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.628 [2024-07-13 05:25:43.088269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.628 [2024-07-13 05:25:43.088557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.628 [2024-07-13 05:25:43.088846] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.628 [2024-07-13 05:25:43.088890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.628 [2024-07-13 05:25:43.088913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.628 [2024-07-13 05:25:43.093060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.628 [2024-07-13 05:25:43.102323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.628 [2024-07-13 05:25:43.102810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.628 [2024-07-13 05:25:43.102850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.628 [2024-07-13 05:25:43.102888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.628 [2024-07-13 05:25:43.103180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.628 [2024-07-13 05:25:43.103477] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.628 [2024-07-13 05:25:43.103508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.628 [2024-07-13 05:25:43.103529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.629 [2024-07-13 05:25:43.107663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.629 [2024-07-13 05:25:43.116942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.629 [2024-07-13 05:25:43.117432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.629 [2024-07-13 05:25:43.117473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.629 [2024-07-13 05:25:43.117499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.629 [2024-07-13 05:25:43.117786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.629 [2024-07-13 05:25:43.118086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.629 [2024-07-13 05:25:43.118118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.629 [2024-07-13 05:25:43.118140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.629 [2024-07-13 05:25:43.122272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.887 [2024-07-13 05:25:43.131595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.887 [2024-07-13 05:25:43.132079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.887 [2024-07-13 05:25:43.132120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.887 [2024-07-13 05:25:43.132147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.887 [2024-07-13 05:25:43.132441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.887 [2024-07-13 05:25:43.132730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.887 [2024-07-13 05:25:43.132761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.887 [2024-07-13 05:25:43.132784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.887 [2024-07-13 05:25:43.136939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.887 [2024-07-13 05:25:43.146232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.887 [2024-07-13 05:25:43.146717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.887 [2024-07-13 05:25:43.146758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.887 [2024-07-13 05:25:43.146784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.887 [2024-07-13 05:25:43.147081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.887 [2024-07-13 05:25:43.147371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.887 [2024-07-13 05:25:43.147402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.887 [2024-07-13 05:25:43.147425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.887 [2024-07-13 05:25:43.151565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.887 [2024-07-13 05:25:43.160845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.887 [2024-07-13 05:25:43.161349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.887 [2024-07-13 05:25:43.161389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.887 [2024-07-13 05:25:43.161415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.887 [2024-07-13 05:25:43.161702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.887 [2024-07-13 05:25:43.162001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.887 [2024-07-13 05:25:43.162033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.887 [2024-07-13 05:25:43.162056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.887 [2024-07-13 05:25:43.166203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.887 [2024-07-13 05:25:43.175540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.887 [2024-07-13 05:25:43.176032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.887 [2024-07-13 05:25:43.176074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.887 [2024-07-13 05:25:43.176100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.887 [2024-07-13 05:25:43.176389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.887 [2024-07-13 05:25:43.176680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.887 [2024-07-13 05:25:43.176712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.887 [2024-07-13 05:25:43.176740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.887 [2024-07-13 05:25:43.180917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.887 [2024-07-13 05:25:43.189985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.887 [2024-07-13 05:25:43.190462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.887 [2024-07-13 05:25:43.190502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.887 [2024-07-13 05:25:43.190529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.887 [2024-07-13 05:25:43.190815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.887 [2024-07-13 05:25:43.191116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.887 [2024-07-13 05:25:43.191148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.887 [2024-07-13 05:25:43.191171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.887 [2024-07-13 05:25:43.195333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.887 [2024-07-13 05:25:43.204602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.887 [2024-07-13 05:25:43.205098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.887 [2024-07-13 05:25:43.205139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.887 [2024-07-13 05:25:43.205165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.887 [2024-07-13 05:25:43.205459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.887 [2024-07-13 05:25:43.205749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.887 [2024-07-13 05:25:43.205780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.887 [2024-07-13 05:25:43.205810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.887 [2024-07-13 05:25:43.209965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.887 [2024-07-13 05:25:43.219263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.887 [2024-07-13 05:25:43.219749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.887 [2024-07-13 05:25:43.219789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.887 [2024-07-13 05:25:43.219815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.887 [2024-07-13 05:25:43.220115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.887 [2024-07-13 05:25:43.220406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.887 [2024-07-13 05:25:43.220437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.887 [2024-07-13 05:25:43.220460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.887 [2024-07-13 05:25:43.224608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.887 [2024-07-13 05:25:43.233975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.887 [2024-07-13 05:25:43.234490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.887 [2024-07-13 05:25:43.234530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.887 [2024-07-13 05:25:43.234556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.887 [2024-07-13 05:25:43.234843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.887 [2024-07-13 05:25:43.235144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.888 [2024-07-13 05:25:43.235176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.888 [2024-07-13 05:25:43.235198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.888 [2024-07-13 05:25:43.239347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.888 [2024-07-13 05:25:43.248670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.888 [2024-07-13 05:25:43.249174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.888 [2024-07-13 05:25:43.249214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.888 [2024-07-13 05:25:43.249240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.888 [2024-07-13 05:25:43.249528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.888 [2024-07-13 05:25:43.249817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.888 [2024-07-13 05:25:43.249849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.888 [2024-07-13 05:25:43.249880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.888 [2024-07-13 05:25:43.254078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.888 [2024-07-13 05:25:43.263177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.888 [2024-07-13 05:25:43.263662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.888 [2024-07-13 05:25:43.263702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.888 [2024-07-13 05:25:43.263728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.888 [2024-07-13 05:25:43.264028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.888 [2024-07-13 05:25:43.264319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.888 [2024-07-13 05:25:43.264351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.888 [2024-07-13 05:25:43.264374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.888 [2024-07-13 05:25:43.268529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.888 [2024-07-13 05:25:43.277818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.888 [2024-07-13 05:25:43.278312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.888 [2024-07-13 05:25:43.278353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.888 [2024-07-13 05:25:43.278380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.888 [2024-07-13 05:25:43.278674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.888 [2024-07-13 05:25:43.278978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.888 [2024-07-13 05:25:43.279009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.888 [2024-07-13 05:25:43.279032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.888 [2024-07-13 05:25:43.283201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.888 [2024-07-13 05:25:43.292520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.888 [2024-07-13 05:25:43.293039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.888 [2024-07-13 05:25:43.293079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.888 [2024-07-13 05:25:43.293106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.888 [2024-07-13 05:25:43.293394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.888 [2024-07-13 05:25:43.293686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.888 [2024-07-13 05:25:43.293716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.888 [2024-07-13 05:25:43.293739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.888 [2024-07-13 05:25:43.297939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.888 [2024-07-13 05:25:43.307049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.888 [2024-07-13 05:25:43.307524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.888 [2024-07-13 05:25:43.307564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.888 [2024-07-13 05:25:43.307590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.888 [2024-07-13 05:25:43.307889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.888 [2024-07-13 05:25:43.308181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.888 [2024-07-13 05:25:43.308213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.888 [2024-07-13 05:25:43.308235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.888 [2024-07-13 05:25:43.312415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.888 [2024-07-13 05:25:43.321546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.888 [2024-07-13 05:25:43.322039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.888 [2024-07-13 05:25:43.322080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.888 [2024-07-13 05:25:43.322107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.888 [2024-07-13 05:25:43.322395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.888 [2024-07-13 05:25:43.322687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.888 [2024-07-13 05:25:43.322719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.888 [2024-07-13 05:25:43.322748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.888 [2024-07-13 05:25:43.326962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.888 [2024-07-13 05:25:43.336115] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.888 [2024-07-13 05:25:43.336608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.888 [2024-07-13 05:25:43.336647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.888 [2024-07-13 05:25:43.336674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.888 [2024-07-13 05:25:43.336978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.888 [2024-07-13 05:25:43.337270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.888 [2024-07-13 05:25:43.337302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.888 [2024-07-13 05:25:43.337324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.888 [2024-07-13 05:25:43.341490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.888 [2024-07-13 05:25:43.350569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.888 [2024-07-13 05:25:43.351050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.888 [2024-07-13 05:25:43.351090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.888 [2024-07-13 05:25:43.351117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.888 [2024-07-13 05:25:43.351406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.888 [2024-07-13 05:25:43.351698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.888 [2024-07-13 05:25:43.351730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.888 [2024-07-13 05:25:43.351752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.888 [2024-07-13 05:25:43.355922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.888 [2024-07-13 05:25:43.365252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.888 [2024-07-13 05:25:43.365725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.888 [2024-07-13 05:25:43.365764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.888 [2024-07-13 05:25:43.365790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.888 [2024-07-13 05:25:43.366090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.888 [2024-07-13 05:25:43.366382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.888 [2024-07-13 05:25:43.366413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.888 [2024-07-13 05:25:43.366436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.888 [2024-07-13 05:25:43.370602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.888 [2024-07-13 05:25:43.379920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.888 [2024-07-13 05:25:43.380418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.888 [2024-07-13 05:25:43.380458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:36.888 [2024-07-13 05:25:43.380485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:36.888 [2024-07-13 05:25:43.380773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:36.888 [2024-07-13 05:25:43.381074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.888 [2024-07-13 05:25:43.381106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.888 [2024-07-13 05:25:43.381128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.888 [2024-07-13 05:25:43.385469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.147 [2024-07-13 05:25:43.394446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.147 [2024-07-13 05:25:43.394964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.147 [2024-07-13 05:25:43.395006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:37.147 [2024-07-13 05:25:43.395033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:37.147 [2024-07-13 05:25:43.395322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:37.147 [2024-07-13 05:25:43.395614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.147 [2024-07-13 05:25:43.395645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.147 [2024-07-13 05:25:43.395668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.147 [2024-07-13 05:25:43.399848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.147 [2024-07-13 05:25:43.408963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.147 [2024-07-13 05:25:43.409444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.147 [2024-07-13 05:25:43.409485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:37.147 [2024-07-13 05:25:43.409511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:37.147 [2024-07-13 05:25:43.409800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:37.147 [2024-07-13 05:25:43.410102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.147 [2024-07-13 05:25:43.410134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.147 [2024-07-13 05:25:43.410157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.147 [2024-07-13 05:25:43.414346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.147 [2024-07-13 05:25:43.423439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.147 [2024-07-13 05:25:43.423951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.147 [2024-07-13 05:25:43.423992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:37.147 [2024-07-13 05:25:43.424019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:37.147 [2024-07-13 05:25:43.424314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:37.147 [2024-07-13 05:25:43.424607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.147 [2024-07-13 05:25:43.424638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.147 [2024-07-13 05:25:43.424660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.147 [2024-07-13 05:25:43.428831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.147 [2024-07-13 05:25:43.437929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.147 [2024-07-13 05:25:43.438420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.147 [2024-07-13 05:25:43.438461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:37.148 [2024-07-13 05:25:43.438487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:37.148 [2024-07-13 05:25:43.438777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:37.148 [2024-07-13 05:25:43.439081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.148 [2024-07-13 05:25:43.439113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.148 [2024-07-13 05:25:43.439136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.148 [2024-07-13 05:25:43.443320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.148 [2024-07-13 05:25:43.452424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.148 [2024-07-13 05:25:43.452968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.148 [2024-07-13 05:25:43.453010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:37.148 [2024-07-13 05:25:43.453037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:37.148 [2024-07-13 05:25:43.453329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:37.148 [2024-07-13 05:25:43.453621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.148 [2024-07-13 05:25:43.453653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.148 [2024-07-13 05:25:43.453676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.148 [2024-07-13 05:25:43.457845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.148 [2024-07-13 05:25:43.466943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.148 [2024-07-13 05:25:43.467444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.148 [2024-07-13 05:25:43.467485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:37.148 [2024-07-13 05:25:43.467512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:37.148 [2024-07-13 05:25:43.467800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:37.148 [2024-07-13 05:25:43.468102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.148 [2024-07-13 05:25:43.468134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.148 [2024-07-13 05:25:43.468163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.148 [2024-07-13 05:25:43.472326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.148 [2024-07-13 05:25:43.481426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.148 [2024-07-13 05:25:43.481917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.148 [2024-07-13 05:25:43.481958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:37.148 [2024-07-13 05:25:43.481984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:37.148 [2024-07-13 05:25:43.482275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:37.148 [2024-07-13 05:25:43.482566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.148 [2024-07-13 05:25:43.482597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.148 [2024-07-13 05:25:43.482619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.148 [2024-07-13 05:25:43.486794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.148 [2024-07-13 05:25:43.495913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.148 [2024-07-13 05:25:43.496368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.148 [2024-07-13 05:25:43.496409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.148 [2024-07-13 05:25:43.496436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.148 [2024-07-13 05:25:43.496723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.148 [2024-07-13 05:25:43.497028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.148 [2024-07-13 05:25:43.497060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.148 [2024-07-13 05:25:43.497082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.148 [2024-07-13 05:25:43.501248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.148 [2024-07-13 05:25:43.510567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.148 [2024-07-13 05:25:43.511039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.148 [2024-07-13 05:25:43.511079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.148 [2024-07-13 05:25:43.511105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.148 [2024-07-13 05:25:43.511394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.148 [2024-07-13 05:25:43.511685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.148 [2024-07-13 05:25:43.511717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.148 [2024-07-13 05:25:43.511739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.148 [2024-07-13 05:25:43.515932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.148 [2024-07-13 05:25:43.525236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.148 [2024-07-13 05:25:43.525715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.148 [2024-07-13 05:25:43.525755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.148 [2024-07-13 05:25:43.525782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.148 [2024-07-13 05:25:43.526082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.148 [2024-07-13 05:25:43.526373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.148 [2024-07-13 05:25:43.526404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.148 [2024-07-13 05:25:43.526427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.148 [2024-07-13 05:25:43.530580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.148 [2024-07-13 05:25:43.539910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.148 [2024-07-13 05:25:43.540415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.148 [2024-07-13 05:25:43.540455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.148 [2024-07-13 05:25:43.540482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.148 [2024-07-13 05:25:43.540770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.148 [2024-07-13 05:25:43.541073] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.148 [2024-07-13 05:25:43.541105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.148 [2024-07-13 05:25:43.541128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.148 [2024-07-13 05:25:43.545553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.148 [2024-07-13 05:25:43.554448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.148 [2024-07-13 05:25:43.554940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.148 [2024-07-13 05:25:43.554981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.148 [2024-07-13 05:25:43.555008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.148 [2024-07-13 05:25:43.555299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.148 [2024-07-13 05:25:43.555593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.148 [2024-07-13 05:25:43.555624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.148 [2024-07-13 05:25:43.555646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.148 [2024-07-13 05:25:43.559820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.148 [2024-07-13 05:25:43.568936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.148 [2024-07-13 05:25:43.569398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.148 [2024-07-13 05:25:43.569445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.148 [2024-07-13 05:25:43.569473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.148 [2024-07-13 05:25:43.569768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.148 [2024-07-13 05:25:43.570071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.148 [2024-07-13 05:25:43.570104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.148 [2024-07-13 05:25:43.570127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.148 [2024-07-13 05:25:43.574306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.148 [2024-07-13 05:25:43.583632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.148 [2024-07-13 05:25:43.584109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.148 [2024-07-13 05:25:43.584150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.148 [2024-07-13 05:25:43.584176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.148 [2024-07-13 05:25:43.584465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.148 [2024-07-13 05:25:43.584757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.148 [2024-07-13 05:25:43.584789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.148 [2024-07-13 05:25:43.584812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.148 [2024-07-13 05:25:43.589001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.148 [2024-07-13 05:25:43.598114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.149 [2024-07-13 05:25:43.598617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.149 [2024-07-13 05:25:43.598658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.149 [2024-07-13 05:25:43.598685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.149 [2024-07-13 05:25:43.598986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.149 [2024-07-13 05:25:43.599276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.149 [2024-07-13 05:25:43.599308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.149 [2024-07-13 05:25:43.599330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.149 [2024-07-13 05:25:43.603496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.149 [2024-07-13 05:25:43.612590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.149 [2024-07-13 05:25:43.613087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.149 [2024-07-13 05:25:43.613128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.149 [2024-07-13 05:25:43.613169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.149 [2024-07-13 05:25:43.613458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.149 [2024-07-13 05:25:43.613749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.149 [2024-07-13 05:25:43.613780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.149 [2024-07-13 05:25:43.613811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.149 [2024-07-13 05:25:43.617986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.149 [2024-07-13 05:25:43.627044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.149 [2024-07-13 05:25:43.627539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.149 [2024-07-13 05:25:43.627580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.149 [2024-07-13 05:25:43.627607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.149 [2024-07-13 05:25:43.627908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.149 [2024-07-13 05:25:43.628200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.149 [2024-07-13 05:25:43.628231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.149 [2024-07-13 05:25:43.628254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.149 [2024-07-13 05:25:43.632427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.149 [2024-07-13 05:25:43.641565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.149 [2024-07-13 05:25:43.642063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.149 [2024-07-13 05:25:43.642103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.149 [2024-07-13 05:25:43.642130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.149 [2024-07-13 05:25:43.642421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.149 [2024-07-13 05:25:43.642727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.149 [2024-07-13 05:25:43.642759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.149 [2024-07-13 05:25:43.642783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.409 [2024-07-13 05:25:43.647102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.409 [2024-07-13 05:25:43.656033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.409 [2024-07-13 05:25:43.656542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.409 [2024-07-13 05:25:43.656583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.409 [2024-07-13 05:25:43.656610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.409 [2024-07-13 05:25:43.656911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.409 [2024-07-13 05:25:43.657202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.409 [2024-07-13 05:25:43.657233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.409 [2024-07-13 05:25:43.657256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.409 [2024-07-13 05:25:43.661417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.409 [2024-07-13 05:25:43.670505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.409 [2024-07-13 05:25:43.670996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.409 [2024-07-13 05:25:43.671038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.409 [2024-07-13 05:25:43.671065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.409 [2024-07-13 05:25:43.671353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.409 [2024-07-13 05:25:43.671644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.409 [2024-07-13 05:25:43.671675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.409 [2024-07-13 05:25:43.671698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.409 [2024-07-13 05:25:43.675888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.409 [2024-07-13 05:25:43.684979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.409 [2024-07-13 05:25:43.685465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.409 [2024-07-13 05:25:43.685505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.409 [2024-07-13 05:25:43.685531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.409 [2024-07-13 05:25:43.685820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.409 [2024-07-13 05:25:43.686124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.409 [2024-07-13 05:25:43.686156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.409 [2024-07-13 05:25:43.686178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.409 [2024-07-13 05:25:43.690343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.409 [2024-07-13 05:25:43.699446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.409 [2024-07-13 05:25:43.699982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.409 [2024-07-13 05:25:43.700023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.409 [2024-07-13 05:25:43.700050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.409 [2024-07-13 05:25:43.700338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.409 [2024-07-13 05:25:43.700629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.409 [2024-07-13 05:25:43.700660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.409 [2024-07-13 05:25:43.700683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.409 [2024-07-13 05:25:43.704859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.409 [2024-07-13 05:25:43.714080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.409 [2024-07-13 05:25:43.714537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.409 [2024-07-13 05:25:43.714578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.409 [2024-07-13 05:25:43.714604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.409 [2024-07-13 05:25:43.714911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.409 [2024-07-13 05:25:43.715201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.409 [2024-07-13 05:25:43.715233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.409 [2024-07-13 05:25:43.715255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.409 [2024-07-13 05:25:43.719423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.409 [2024-07-13 05:25:43.728736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.409 [2024-07-13 05:25:43.729231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.409 [2024-07-13 05:25:43.729271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.409 [2024-07-13 05:25:43.729298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.409 [2024-07-13 05:25:43.729587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.409 [2024-07-13 05:25:43.729888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.409 [2024-07-13 05:25:43.729920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.409 [2024-07-13 05:25:43.729942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.409 [2024-07-13 05:25:43.734108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.409 [2024-07-13 05:25:43.743202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.409 [2024-07-13 05:25:43.743714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.409 [2024-07-13 05:25:43.743754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.409 [2024-07-13 05:25:43.743781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.409 [2024-07-13 05:25:43.744083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.409 [2024-07-13 05:25:43.744373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.409 [2024-07-13 05:25:43.744405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.409 [2024-07-13 05:25:43.744427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.409 [2024-07-13 05:25:43.748589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.409 [2024-07-13 05:25:43.757728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.409 [2024-07-13 05:25:43.758226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.409 [2024-07-13 05:25:43.758267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.409 [2024-07-13 05:25:43.758294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.409 [2024-07-13 05:25:43.758582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.409 [2024-07-13 05:25:43.758882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.410 [2024-07-13 05:25:43.758920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.410 [2024-07-13 05:25:43.758943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.410 [2024-07-13 05:25:43.763127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.410 [2024-07-13 05:25:43.772226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.410 [2024-07-13 05:25:43.772742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.410 [2024-07-13 05:25:43.772783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.410 [2024-07-13 05:25:43.772810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.410 [2024-07-13 05:25:43.773111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.410 [2024-07-13 05:25:43.773403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.410 [2024-07-13 05:25:43.773434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.410 [2024-07-13 05:25:43.773457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.410 [2024-07-13 05:25:43.777631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
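The "(9): Bad file descriptor" in the flush step of each cycle is errno 9, EBADF: by the time nvme_tcp_qpair_process_completions tries to flush the qpair, the socket behind it has already been torn down by the refused connect. A minimal sketch of that failure mode (illustrative only, not SPDK's flush path):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Obtain a descriptor, then close it -- the state the qpair's
     * socket is in once the refused connect has torn it down. */
    int fd = dup(STDOUT_FILENO);
    close(fd);

    /* Any later I/O on the dead descriptor fails with errno 9 (EBADF),
     * matching the "(9): Bad file descriptor" reported by the flush. */
    if (write(fd, "x", 1) < 0) {
        printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    return 0;
}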
00:36:37.410 [2024-07-13 05:25:43.786701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.410 [2024-07-13 05:25:43.787184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.410 [2024-07-13 05:25:43.787225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.410 [2024-07-13 05:25:43.787252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.410 [2024-07-13 05:25:43.787541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.410 [2024-07-13 05:25:43.787834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.410 [2024-07-13 05:25:43.787875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.410 [2024-07-13 05:25:43.787901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.410 [2024-07-13 05:25:43.792080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.410 [2024-07-13 05:25:43.801191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.410 [2024-07-13 05:25:43.801649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.410 [2024-07-13 05:25:43.801689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.410 [2024-07-13 05:25:43.801715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.410 [2024-07-13 05:25:43.802016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.410 [2024-07-13 05:25:43.802310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.410 [2024-07-13 05:25:43.802342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.410 [2024-07-13 05:25:43.802364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.410 [2024-07-13 05:25:43.806551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.410 [2024-07-13 05:25:43.815682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.410 [2024-07-13 05:25:43.816198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.410 [2024-07-13 05:25:43.816238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.410 [2024-07-13 05:25:43.816264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.410 [2024-07-13 05:25:43.816555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.410 [2024-07-13 05:25:43.816859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.410 [2024-07-13 05:25:43.816901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.410 [2024-07-13 05:25:43.816923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.410 [2024-07-13 05:25:43.821100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.410 [2024-07-13 05:25:43.830264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.410 [2024-07-13 05:25:43.830749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.410 [2024-07-13 05:25:43.830790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.410 [2024-07-13 05:25:43.830816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.410 [2024-07-13 05:25:43.831112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.410 [2024-07-13 05:25:43.831404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.410 [2024-07-13 05:25:43.831436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.410 [2024-07-13 05:25:43.831458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.410 [2024-07-13 05:25:43.835643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.410 [2024-07-13 05:25:43.844778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.410 [2024-07-13 05:25:43.845257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.410 [2024-07-13 05:25:43.845297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.410 [2024-07-13 05:25:43.845324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.410 [2024-07-13 05:25:43.845613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.410 [2024-07-13 05:25:43.845916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.410 [2024-07-13 05:25:43.845948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.410 [2024-07-13 05:25:43.845972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.410 [2024-07-13 05:25:43.850137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.410 [2024-07-13 05:25:43.859426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.410 [2024-07-13 05:25:43.859929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.410 [2024-07-13 05:25:43.859971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.410 [2024-07-13 05:25:43.860003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.410 [2024-07-13 05:25:43.860291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.410 [2024-07-13 05:25:43.860581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.410 [2024-07-13 05:25:43.860612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.410 [2024-07-13 05:25:43.860635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.410 [2024-07-13 05:25:43.864791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.410 [2024-07-13 05:25:43.874115] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.410 [2024-07-13 05:25:43.874603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.410 [2024-07-13 05:25:43.874643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.410 [2024-07-13 05:25:43.874670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.410 [2024-07-13 05:25:43.874972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.410 [2024-07-13 05:25:43.875262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.410 [2024-07-13 05:25:43.875295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.410 [2024-07-13 05:25:43.875317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.410 [2024-07-13 05:25:43.879470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.410 [2024-07-13 05:25:43.888612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.410 [2024-07-13 05:25:43.889096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.410 [2024-07-13 05:25:43.889136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.410 [2024-07-13 05:25:43.889163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.410 [2024-07-13 05:25:43.889449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.410 [2024-07-13 05:25:43.889741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.410 [2024-07-13 05:25:43.889772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.410 [2024-07-13 05:25:43.889795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.410 [2024-07-13 05:25:43.893962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.410 [2024-07-13 05:25:43.903181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.410 [2024-07-13 05:25:43.903669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.410 [2024-07-13 05:25:43.903711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.410 [2024-07-13 05:25:43.903737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.410 [2024-07-13 05:25:43.904063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.410 [2024-07-13 05:25:43.904355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.410 [2024-07-13 05:25:43.904392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.410 [2024-07-13 05:25:43.904416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.670 [2024-07-13 05:25:43.908735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.670 [2024-07-13 05:25:43.917679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.670 [2024-07-13 05:25:43.918164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.670 [2024-07-13 05:25:43.918206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.670 [2024-07-13 05:25:43.918233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.670 [2024-07-13 05:25:43.918521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.670 [2024-07-13 05:25:43.918813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.670 [2024-07-13 05:25:43.918844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.670 [2024-07-13 05:25:43.918875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.670 [2024-07-13 05:25:43.923065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.670 [2024-07-13 05:25:43.932179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.670 [2024-07-13 05:25:43.932684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.670 [2024-07-13 05:25:43.932725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.670 [2024-07-13 05:25:43.932752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.670 [2024-07-13 05:25:43.933054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.670 [2024-07-13 05:25:43.933345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.670 [2024-07-13 05:25:43.933377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.670 [2024-07-13 05:25:43.933399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.670 [2024-07-13 05:25:43.937560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.670 [2024-07-13 05:25:43.946890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.670 [2024-07-13 05:25:43.947375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.670 [2024-07-13 05:25:43.947416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.670 [2024-07-13 05:25:43.947442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.670 [2024-07-13 05:25:43.947731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.670 [2024-07-13 05:25:43.948033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.670 [2024-07-13 05:25:43.948065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.670 [2024-07-13 05:25:43.948089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.670 [2024-07-13 05:25:43.952262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.670 [2024-07-13 05:25:43.961412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.670 [2024-07-13 05:25:43.961898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.670 [2024-07-13 05:25:43.961945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.670 [2024-07-13 05:25:43.961971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.670 [2024-07-13 05:25:43.962270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.670 [2024-07-13 05:25:43.962570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.670 [2024-07-13 05:25:43.962602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.670 [2024-07-13 05:25:43.962627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.670 [2024-07-13 05:25:43.966802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.670 [2024-07-13 05:25:43.975952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.670 [2024-07-13 05:25:43.976424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.670 [2024-07-13 05:25:43.976466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.670 [2024-07-13 05:25:43.976492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.670 [2024-07-13 05:25:43.976787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.670 [2024-07-13 05:25:43.977089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.670 [2024-07-13 05:25:43.977122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.670 [2024-07-13 05:25:43.977145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.670 [2024-07-13 05:25:43.981311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.670 [2024-07-13 05:25:43.990423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.670 [2024-07-13 05:25:43.990922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.670 [2024-07-13 05:25:43.990963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.670 [2024-07-13 05:25:43.990989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.670 [2024-07-13 05:25:43.991278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.670 [2024-07-13 05:25:43.991579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.670 [2024-07-13 05:25:43.991611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.670 [2024-07-13 05:25:43.991633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.670 [2024-07-13 05:25:43.995818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.670 [2024-07-13 05:25:44.004994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.670 [2024-07-13 05:25:44.005465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.670 [2024-07-13 05:25:44.005506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.670 [2024-07-13 05:25:44.005539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.670 [2024-07-13 05:25:44.005827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.671 [2024-07-13 05:25:44.006128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.671 [2024-07-13 05:25:44.006170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.671 [2024-07-13 05:25:44.006193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.671 [2024-07-13 05:25:44.010365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.671 [2024-07-13 05:25:44.019564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.671 [2024-07-13 05:25:44.020076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.671 [2024-07-13 05:25:44.020126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.671 [2024-07-13 05:25:44.020152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.671 [2024-07-13 05:25:44.020439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.671 [2024-07-13 05:25:44.020728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.671 [2024-07-13 05:25:44.020760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.671 [2024-07-13 05:25:44.020798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.671 [2024-07-13 05:25:44.024974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.671 [2024-07-13 05:25:44.034093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.671 [2024-07-13 05:25:44.034590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.671 [2024-07-13 05:25:44.034638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.671 [2024-07-13 05:25:44.034666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.671 [2024-07-13 05:25:44.034967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.671 [2024-07-13 05:25:44.035256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.671 [2024-07-13 05:25:44.035288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.671 [2024-07-13 05:25:44.035311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.671 [2024-07-13 05:25:44.039474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.671 [2024-07-13 05:25:44.048590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.671 [2024-07-13 05:25:44.049093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.671 [2024-07-13 05:25:44.049134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.671 [2024-07-13 05:25:44.049160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.671 [2024-07-13 05:25:44.049448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.671 [2024-07-13 05:25:44.049739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.671 [2024-07-13 05:25:44.049777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.671 [2024-07-13 05:25:44.049800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.671 [2024-07-13 05:25:44.053977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.671 [2024-07-13 05:25:44.063073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.671 [2024-07-13 05:25:44.063566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.671 [2024-07-13 05:25:44.063607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.671 [2024-07-13 05:25:44.063633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.671 [2024-07-13 05:25:44.063933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.671 [2024-07-13 05:25:44.064223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.671 [2024-07-13 05:25:44.064255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.671 [2024-07-13 05:25:44.064277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.671 [2024-07-13 05:25:44.068439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.671 [2024-07-13 05:25:44.077548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.671 [2024-07-13 05:25:44.078012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.671 [2024-07-13 05:25:44.078055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.671 [2024-07-13 05:25:44.078082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.671 [2024-07-13 05:25:44.078373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.671 [2024-07-13 05:25:44.078666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.671 [2024-07-13 05:25:44.078697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.671 [2024-07-13 05:25:44.078719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.671 [2024-07-13 05:25:44.082955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.671 [2024-07-13 05:25:44.092174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.671 [2024-07-13 05:25:44.092671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.671 [2024-07-13 05:25:44.092712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.671 [2024-07-13 05:25:44.092738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.671 [2024-07-13 05:25:44.093043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.671 [2024-07-13 05:25:44.093335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.671 [2024-07-13 05:25:44.093367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.671 [2024-07-13 05:25:44.093389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.671 [2024-07-13 05:25:44.097638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.671 [2024-07-13 05:25:44.106815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.671 [2024-07-13 05:25:44.107334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.671 [2024-07-13 05:25:44.107375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.671 [2024-07-13 05:25:44.107401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.671 [2024-07-13 05:25:44.107689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.671 [2024-07-13 05:25:44.107995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.671 [2024-07-13 05:25:44.108028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.671 [2024-07-13 05:25:44.108050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.671 [2024-07-13 05:25:44.112239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.671 [2024-07-13 05:25:44.121335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.671 [2024-07-13 05:25:44.121799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.671 [2024-07-13 05:25:44.121839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:37.671 [2024-07-13 05:25:44.121871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:37.671 [2024-07-13 05:25:44.122162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:37.671 [2024-07-13 05:25:44.122453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:37.671 [2024-07-13 05:25:44.122484] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:37.671 [2024-07-13 05:25:44.122507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:37.671 [2024-07-13 05:25:44.126671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:37.671 [2024-07-13 05:25:44.135987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.671 [2024-07-13 05:25:44.136441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.671 [2024-07-13 05:25:44.136482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:37.671 [2024-07-13 05:25:44.136508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:37.671 [2024-07-13 05:25:44.136795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:37.671 [2024-07-13 05:25:44.137095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.671 [2024-07-13 05:25:44.137127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.671 [2024-07-13 05:25:44.137149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.671 [2024-07-13 05:25:44.141311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.671 [2024-07-13 05:25:44.150632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.671 [2024-07-13 05:25:44.151126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.671 [2024-07-13 05:25:44.151167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:37.671 [2024-07-13 05:25:44.151199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:37.671 [2024-07-13 05:25:44.151488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:37.671 [2024-07-13 05:25:44.151779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.671 [2024-07-13 05:25:44.151811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.671 [2024-07-13 05:25:44.151833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.671 [2024-07-13 05:25:44.156010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.671 [2024-07-13 05:25:44.165368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.671 [2024-07-13 05:25:44.165876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.671 [2024-07-13 05:25:44.165917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:37.671 [2024-07-13 05:25:44.165959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:37.672 [2024-07-13 05:25:44.166291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:37.672 [2024-07-13 05:25:44.166583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.672 [2024-07-13 05:25:44.166614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.672 [2024-07-13 05:25:44.166637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.931 [2024-07-13 05:25:44.170901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.931 [2024-07-13 05:25:44.180062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.931 [2024-07-13 05:25:44.180557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.931 [2024-07-13 05:25:44.180599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:37.931 [2024-07-13 05:25:44.180625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:37.931 [2024-07-13 05:25:44.180928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:37.931 [2024-07-13 05:25:44.181220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.931 [2024-07-13 05:25:44.181251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.931 [2024-07-13 05:25:44.181273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.931 [2024-07-13 05:25:44.185437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.931 [2024-07-13 05:25:44.194530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.931 [2024-07-13 05:25:44.195033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.931 [2024-07-13 05:25:44.195074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:37.931 [2024-07-13 05:25:44.195101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:37.931 [2024-07-13 05:25:44.195390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:37.931 [2024-07-13 05:25:44.195680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.931 [2024-07-13 05:25:44.195717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.931 [2024-07-13 05:25:44.195740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.931 [2024-07-13 05:25:44.199945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.931 [2024-07-13 05:25:44.209019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.931 [2024-07-13 05:25:44.209527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.931 [2024-07-13 05:25:44.209568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:37.931 [2024-07-13 05:25:44.209595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:37.931 [2024-07-13 05:25:44.209897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:37.931 [2024-07-13 05:25:44.210190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.931 [2024-07-13 05:25:44.210221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.931 [2024-07-13 05:25:44.210244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.931 [2024-07-13 05:25:44.214405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.931 [2024-07-13 05:25:44.223492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.931 [2024-07-13 05:25:44.223933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.931 [2024-07-13 05:25:44.223975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:37.931 [2024-07-13 05:25:44.224002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:37.931 [2024-07-13 05:25:44.224291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:37.931 [2024-07-13 05:25:44.224579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.931 [2024-07-13 05:25:44.224611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.931 [2024-07-13 05:25:44.224633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.931 [2024-07-13 05:25:44.228815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.931 [2024-07-13 05:25:44.238182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.931 [2024-07-13 05:25:44.238681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.931 [2024-07-13 05:25:44.238726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:37.931 [2024-07-13 05:25:44.238753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:37.931 [2024-07-13 05:25:44.239050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:37.931 [2024-07-13 05:25:44.239339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.931 [2024-07-13 05:25:44.239372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.931 [2024-07-13 05:25:44.239396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.931 [2024-07-13 05:25:44.243571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.931 [2024-07-13 05:25:44.252709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.931 [2024-07-13 05:25:44.253214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.931 [2024-07-13 05:25:44.253254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:37.931 [2024-07-13 05:25:44.253280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:37.931 [2024-07-13 05:25:44.253566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:37.931 [2024-07-13 05:25:44.253855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.931 [2024-07-13 05:25:44.253896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.931 [2024-07-13 05:25:44.253930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.931 [2024-07-13 05:25:44.258118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.931 [2024-07-13 05:25:44.267276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.932 [2024-07-13 05:25:44.267761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.932 [2024-07-13 05:25:44.267801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:37.932 [2024-07-13 05:25:44.267827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:37.932 [2024-07-13 05:25:44.268125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:37.932 [2024-07-13 05:25:44.268418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.932 [2024-07-13 05:25:44.268450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.932 [2024-07-13 05:25:44.268473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.932 [2024-07-13 05:25:44.272636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.932 [2024-07-13 05:25:44.281725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.932 [2024-07-13 05:25:44.282232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.932 [2024-07-13 05:25:44.282274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:37.932 [2024-07-13 05:25:44.282300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:37.932 [2024-07-13 05:25:44.282588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:37.932 [2024-07-13 05:25:44.282891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.932 [2024-07-13 05:25:44.282925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.932 [2024-07-13 05:25:44.282948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.932 [2024-07-13 05:25:44.287172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.932 [2024-07-13 05:25:44.296230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.932 [2024-07-13 05:25:44.296692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.932 [2024-07-13 05:25:44.296733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:37.932 [2024-07-13 05:25:44.296765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:37.932 [2024-07-13 05:25:44.297068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:37.932 [2024-07-13 05:25:44.297359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.932 [2024-07-13 05:25:44.297392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.932 [2024-07-13 05:25:44.297416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.932 [2024-07-13 05:25:44.301585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.932 [2024-07-13 05:25:44.310913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.932 [2024-07-13 05:25:44.311398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.932 [2024-07-13 05:25:44.311439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:37.932 [2024-07-13 05:25:44.311466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:37.932 [2024-07-13 05:25:44.311754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:37.932 [2024-07-13 05:25:44.312059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.932 [2024-07-13 05:25:44.312094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.932 [2024-07-13 05:25:44.312117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.932 [2024-07-13 05:25:44.316288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.932 [2024-07-13 05:25:44.325401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.932 [2024-07-13 05:25:44.325939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.932 [2024-07-13 05:25:44.325981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:37.932 [2024-07-13 05:25:44.326008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:37.932 [2024-07-13 05:25:44.326299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:37.932 [2024-07-13 05:25:44.326591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.932 [2024-07-13 05:25:44.326624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.932 [2024-07-13 05:25:44.326647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.932 [2024-07-13 05:25:44.330826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.932 [2024-07-13 05:25:44.339965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.932 [2024-07-13 05:25:44.340459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.932 [2024-07-13 05:25:44.340500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:37.932 [2024-07-13 05:25:44.340526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:37.932 [2024-07-13 05:25:44.340817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:37.932 [2024-07-13 05:25:44.341125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.932 [2024-07-13 05:25:44.341159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.932 [2024-07-13 05:25:44.341183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.932 [2024-07-13 05:25:44.345355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.932 [2024-07-13 05:25:44.354489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.932 [2024-07-13 05:25:44.354972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.932 [2024-07-13 05:25:44.355014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:37.932 [2024-07-13 05:25:44.355040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:37.932 [2024-07-13 05:25:44.355334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:37.932 [2024-07-13 05:25:44.355629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.932 [2024-07-13 05:25:44.355662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.932 [2024-07-13 05:25:44.355685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.932 [2024-07-13 05:25:44.359857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.932 [2024-07-13 05:25:44.368945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.932 [2024-07-13 05:25:44.369449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.932 [2024-07-13 05:25:44.369490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:37.932 [2024-07-13 05:25:44.369517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:37.932 [2024-07-13 05:25:44.369806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:37.932 [2024-07-13 05:25:44.370109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.932 [2024-07-13 05:25:44.370143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.932 [2024-07-13 05:25:44.370167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.932 [2024-07-13 05:25:44.374332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.932 [2024-07-13 05:25:44.383412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.932 [2024-07-13 05:25:44.383907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.932 [2024-07-13 05:25:44.383948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:37.932 [2024-07-13 05:25:44.383975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:37.932 [2024-07-13 05:25:44.384265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:37.932 [2024-07-13 05:25:44.384555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.932 [2024-07-13 05:25:44.384588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.932 [2024-07-13 05:25:44.384611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.932 [2024-07-13 05:25:44.388784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.932 [2024-07-13 05:25:44.398107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.932 [2024-07-13 05:25:44.398609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.932 [2024-07-13 05:25:44.398651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:37.932 [2024-07-13 05:25:44.398677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:37.932 [2024-07-13 05:25:44.398980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:37.932 [2024-07-13 05:25:44.399270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.932 [2024-07-13 05:25:44.399303] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.932 [2024-07-13 05:25:44.399326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.932 [2024-07-13 05:25:44.403481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.932 [2024-07-13 05:25:44.412795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.932 [2024-07-13 05:25:44.413287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.932 [2024-07-13 05:25:44.413329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:37.932 [2024-07-13 05:25:44.413356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:37.932 [2024-07-13 05:25:44.413647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:37.932 [2024-07-13 05:25:44.413952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.932 [2024-07-13 05:25:44.413985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.933 [2024-07-13 05:25:44.414008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.933 [2024-07-13 05:25:44.418169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.933 [2024-07-13 05:25:44.427603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.933 [2024-07-13 05:25:44.428117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.933 [2024-07-13 05:25:44.428161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:37.933 [2024-07-13 05:25:44.428189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:37.933 [2024-07-13 05:25:44.428480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:37.933 [2024-07-13 05:25:44.428770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.933 [2024-07-13 05:25:44.428802] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.933 [2024-07-13 05:25:44.428826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.192 [2024-07-13 05:25:44.433073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.192 [2024-07-13 05:25:44.442268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.192 [2024-07-13 05:25:44.442785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.192 [2024-07-13 05:25:44.442840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.192 [2024-07-13 05:25:44.442884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.192 [2024-07-13 05:25:44.443178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.192 [2024-07-13 05:25:44.443472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.192 [2024-07-13 05:25:44.443506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.192 [2024-07-13 05:25:44.443530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.192 [2024-07-13 05:25:44.447710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.192 [2024-07-13 05:25:44.456790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.192 [2024-07-13 05:25:44.457285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.192 [2024-07-13 05:25:44.457327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.192 [2024-07-13 05:25:44.457353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.192 [2024-07-13 05:25:44.457645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.192 [2024-07-13 05:25:44.457951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.192 [2024-07-13 05:25:44.457985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.192 [2024-07-13 05:25:44.458008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.192 [2024-07-13 05:25:44.462169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.192 [2024-07-13 05:25:44.471493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.192 [2024-07-13 05:25:44.471990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.192 [2024-07-13 05:25:44.472033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.192 [2024-07-13 05:25:44.472060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.192 [2024-07-13 05:25:44.472349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.192 [2024-07-13 05:25:44.472640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.192 [2024-07-13 05:25:44.472673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.192 [2024-07-13 05:25:44.472696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.192 [2024-07-13 05:25:44.476885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.192 [2024-07-13 05:25:44.485966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.192 [2024-07-13 05:25:44.486450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.192 [2024-07-13 05:25:44.486491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.192 [2024-07-13 05:25:44.486517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.192 [2024-07-13 05:25:44.486807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.192 [2024-07-13 05:25:44.487116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.192 [2024-07-13 05:25:44.487150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.192 [2024-07-13 05:25:44.487174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.192 [2024-07-13 05:25:44.491350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 864475 Killed "${NVMF_APP[@]}" "$@" 00:36:38.193 05:25:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:36:38.193 05:25:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:38.193 05:25:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:38.193 05:25:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:38.193 05:25:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:38.193 05:25:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=865686 00:36:38.193 05:25:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:38.193 05:25:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 865686 00:36:38.193 05:25:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 865686 ']' 00:36:38.193 05:25:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:38.193 05:25:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:38.193 05:25:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:38.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
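Here bdevperf.sh (its line 35) reaps the nvmf target that was killed earlier, and tgt_init brings up a replacement: nvmfappstart launches nvmf_tgt with core mask 0xE inside the cvl_0_0_ns_spdk network namespace, records nvmfpid=865686, and waitforlisten blocks until that pid is up and serving the RPC socket /var/tmp/spdk.sock. What follows is a minimal sketch of that start-and-wait pattern, assembled only from the paths and arguments visible in the trace; the real helpers live in nvmf/common.sh and autotest_common.sh and do considerably more bookkeeping:

    # Launch the target in the background, then poll for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    while ! [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # stop waiting if the target died
        sleep 0.1
    done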
00:36:38.193 05:25:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:38.193 05:25:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:38.193 [2024-07-13 05:25:44.500443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.193 [2024-07-13 05:25:44.500942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.193 [2024-07-13 05:25:44.500983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.193 [2024-07-13 05:25:44.501012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.193 [2024-07-13 05:25:44.501305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.193 [2024-07-13 05:25:44.501598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.193 [2024-07-13 05:25:44.501629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.193 [2024-07-13 05:25:44.501653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.193 [2024-07-13 05:25:44.505833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.193 [2024-07-13 05:25:44.514917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.193 [2024-07-13 05:25:44.515418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.193 [2024-07-13 05:25:44.515459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.193 [2024-07-13 05:25:44.515486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.193 [2024-07-13 05:25:44.515776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.193 [2024-07-13 05:25:44.516084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.193 [2024-07-13 05:25:44.516117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.193 [2024-07-13 05:25:44.516140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.193 [2024-07-13 05:25:44.520290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.193 [2024-07-13 05:25:44.529595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.193 [2024-07-13 05:25:44.530088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.193 [2024-07-13 05:25:44.530129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.193 [2024-07-13 05:25:44.530157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.193 [2024-07-13 05:25:44.530444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.193 [2024-07-13 05:25:44.530734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.193 [2024-07-13 05:25:44.530766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.193 [2024-07-13 05:25:44.530790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.193 [2024-07-13 05:25:44.534970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.193 [2024-07-13 05:25:44.544086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.193 [2024-07-13 05:25:44.544572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.193 [2024-07-13 05:25:44.544620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.193 [2024-07-13 05:25:44.544646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.193 [2024-07-13 05:25:44.544955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.193 [2024-07-13 05:25:44.545245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.193 [2024-07-13 05:25:44.545278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.193 [2024-07-13 05:25:44.545300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.193 [2024-07-13 05:25:44.549452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.193 [2024-07-13 05:25:44.558758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.193 [2024-07-13 05:25:44.559277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.193 [2024-07-13 05:25:44.559327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.193 [2024-07-13 05:25:44.559353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.193 [2024-07-13 05:25:44.559641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.193 [2024-07-13 05:25:44.560173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.193 [2024-07-13 05:25:44.560206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.193 [2024-07-13 05:25:44.560238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.193 [2024-07-13 05:25:44.564402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.193 [2024-07-13 05:25:44.573483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.193 [2024-07-13 05:25:44.573958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.193 [2024-07-13 05:25:44.574007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.193 [2024-07-13 05:25:44.574043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.193 [2024-07-13 05:25:44.574334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.193 [2024-07-13 05:25:44.574625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.193 [2024-07-13 05:25:44.574658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.193 [2024-07-13 05:25:44.574681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.193 [2024-07-13 05:25:44.578870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.193 [2024-07-13 05:25:44.587438] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:36:38.193 [2024-07-13 05:25:44.587584] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:38.193 [2024-07-13 05:25:44.588064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.193 [2024-07-13 05:25:44.588556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.193 [2024-07-13 05:25:44.588604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.193 [2024-07-13 05:25:44.588631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.193 [2024-07-13 05:25:44.588932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.193 [2024-07-13 05:25:44.589231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.193 [2024-07-13 05:25:44.589263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.193 [2024-07-13 05:25:44.589286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.193 [2024-07-13 05:25:44.593498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.193 [2024-07-13 05:25:44.602667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.193 [2024-07-13 05:25:44.603185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.193 [2024-07-13 05:25:44.603235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.193 [2024-07-13 05:25:44.603262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.193 [2024-07-13 05:25:44.603550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.193 [2024-07-13 05:25:44.603841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.193 [2024-07-13 05:25:44.603880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.193 [2024-07-13 05:25:44.603905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.193 [2024-07-13 05:25:44.608082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.193 [2024-07-13 05:25:44.617164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.193 [2024-07-13 05:25:44.617657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.193 [2024-07-13 05:25:44.617708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.193 [2024-07-13 05:25:44.617734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.193 [2024-07-13 05:25:44.618036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.193 [2024-07-13 05:25:44.618327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.193 [2024-07-13 05:25:44.618359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.193 [2024-07-13 05:25:44.618382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.193 [2024-07-13 05:25:44.622546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.193 [2024-07-13 05:25:44.631636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.193 [2024-07-13 05:25:44.632150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.194 [2024-07-13 05:25:44.632192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.194 [2024-07-13 05:25:44.632223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.194 [2024-07-13 05:25:44.632510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.194 [2024-07-13 05:25:44.632801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.194 [2024-07-13 05:25:44.632833] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.194 [2024-07-13 05:25:44.632876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.194 [2024-07-13 05:25:44.637086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.194 [2024-07-13 05:25:44.646151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.194 [2024-07-13 05:25:44.646638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.194 [2024-07-13 05:25:44.646687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.194 [2024-07-13 05:25:44.646714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.194 [2024-07-13 05:25:44.647030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.194 [2024-07-13 05:25:44.647320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.194 [2024-07-13 05:25:44.647353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.194 [2024-07-13 05:25:44.647376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.194 [2024-07-13 05:25:44.651523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.194 [2024-07-13 05:25:44.660816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.194 [2024-07-13 05:25:44.661298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.194 [2024-07-13 05:25:44.661349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.194 [2024-07-13 05:25:44.661381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.194 [2024-07-13 05:25:44.661671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.194 [2024-07-13 05:25:44.661972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.194 [2024-07-13 05:25:44.662004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.194 [2024-07-13 05:25:44.662037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.194 [2024-07-13 05:25:44.666199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.194 EAL: No free 2048 kB hugepages reported on node 1 00:36:38.194 [2024-07-13 05:25:44.675337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.194 [2024-07-13 05:25:44.675850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.194 [2024-07-13 05:25:44.675910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.194 [2024-07-13 05:25:44.675938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.194 [2024-07-13 05:25:44.676228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.194 [2024-07-13 05:25:44.676517] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.194 [2024-07-13 05:25:44.676549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.194 [2024-07-13 05:25:44.676573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.194 [2024-07-13 05:25:44.680735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.194 [2024-07-13 05:25:44.690018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.194 [2024-07-13 05:25:44.690574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.454 [2024-07-13 05:25:44.690625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.454 [2024-07-13 05:25:44.690652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.454 [2024-07-13 05:25:44.690953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.454 [2024-07-13 05:25:44.691269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.454 [2024-07-13 05:25:44.691313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.454 [2024-07-13 05:25:44.691348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.454 [2024-07-13 05:25:44.695634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
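The EAL records interleaved above show the replacement target initializing: SPDK v24.09-pre (git sha1 719d03c6a) on DPDK 24.03.0 with the parameter list printed at startup, plus a notice that NUMA node 1 had no free 2048 kB hugepages; since EAL only warns here, the run presumably proceeds on the other node's pool. To check the per-node pools this notice refers to, the kernel exposes them in sysfs, for example:

    # Free 2048 kB hugepages per NUMA node; the EAL notice above means the
    # node1 value was 0 when nvmf_tgt started.
    grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages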
00:36:38.454 [2024-07-13 05:25:44.704549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.454 [2024-07-13 05:25:44.705050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.454 [2024-07-13 05:25:44.705099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.454 [2024-07-13 05:25:44.705127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.454 [2024-07-13 05:25:44.705428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.454 [2024-07-13 05:25:44.705728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.454 [2024-07-13 05:25:44.705760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.454 [2024-07-13 05:25:44.705793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.454 [2024-07-13 05:25:44.710021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.454 [2024-07-13 05:25:44.719205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.454 [2024-07-13 05:25:44.719695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.454 [2024-07-13 05:25:44.719746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.454 [2024-07-13 05:25:44.719773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.454 [2024-07-13 05:25:44.720077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.454 [2024-07-13 05:25:44.720371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.454 [2024-07-13 05:25:44.720403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.454 [2024-07-13 05:25:44.720427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.454 [2024-07-13 05:25:44.724616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.454 [2024-07-13 05:25:44.733837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.454 [2024-07-13 05:25:44.734339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.454 [2024-07-13 05:25:44.734387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.454 [2024-07-13 05:25:44.734414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.454 [2024-07-13 05:25:44.734706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.454 [2024-07-13 05:25:44.735009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.454 [2024-07-13 05:25:44.735054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.454 [2024-07-13 05:25:44.735077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.454 [2024-07-13 05:25:44.738913] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:38.454 [2024-07-13 05:25:44.739288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.455 [2024-07-13 05:25:44.748555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.455 [2024-07-13 05:25:44.749164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.455 [2024-07-13 05:25:44.749210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.455 [2024-07-13 05:25:44.749244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.455 [2024-07-13 05:25:44.749543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.455 [2024-07-13 05:25:44.749845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.455 [2024-07-13 05:25:44.749890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.455 [2024-07-13 05:25:44.749937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.455 [2024-07-13 05:25:44.754277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.455 [2024-07-13 05:25:44.763425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.455 [2024-07-13 05:25:44.763975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.455 [2024-07-13 05:25:44.764023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.455 [2024-07-13 05:25:44.764053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.455 [2024-07-13 05:25:44.764353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.455 [2024-07-13 05:25:44.764655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.455 [2024-07-13 05:25:44.764689] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.455 [2024-07-13 05:25:44.764715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.455 [2024-07-13 05:25:44.769002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.455 [2024-07-13 05:25:44.778091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.455 [2024-07-13 05:25:44.778592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.455 [2024-07-13 05:25:44.778633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.455 [2024-07-13 05:25:44.778661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.455 [2024-07-13 05:25:44.778969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.455 [2024-07-13 05:25:44.779268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.455 [2024-07-13 05:25:44.779302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.455 [2024-07-13 05:25:44.779326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.455 [2024-07-13 05:25:44.783630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.455 [2024-07-13 05:25:44.792732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.455 [2024-07-13 05:25:44.793254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.455 [2024-07-13 05:25:44.793297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.455 [2024-07-13 05:25:44.793323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.455 [2024-07-13 05:25:44.793619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.455 [2024-07-13 05:25:44.793935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.455 [2024-07-13 05:25:44.793970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.455 [2024-07-13 05:25:44.793994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.455 [2024-07-13 05:25:44.798245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.455 [2024-07-13 05:25:44.807391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.455 [2024-07-13 05:25:44.807899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.455 [2024-07-13 05:25:44.807940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.455 [2024-07-13 05:25:44.807967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.455 [2024-07-13 05:25:44.808262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.455 [2024-07-13 05:25:44.808558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.455 [2024-07-13 05:25:44.808592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.455 [2024-07-13 05:25:44.808616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.455 [2024-07-13 05:25:44.812810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.455 [2024-07-13 05:25:44.821997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.455 [2024-07-13 05:25:44.822481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.455 [2024-07-13 05:25:44.822524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.455 [2024-07-13 05:25:44.822551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.455 [2024-07-13 05:25:44.822843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.455 [2024-07-13 05:25:44.823151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.455 [2024-07-13 05:25:44.823186] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.455 [2024-07-13 05:25:44.823209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.455 [2024-07-13 05:25:44.827414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.455 [2024-07-13 05:25:44.836691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.455 [2024-07-13 05:25:44.837182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.455 [2024-07-13 05:25:44.837226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.455 [2024-07-13 05:25:44.837255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.455 [2024-07-13 05:25:44.837550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.455 [2024-07-13 05:25:44.837851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.455 [2024-07-13 05:25:44.837894] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.455 [2024-07-13 05:25:44.837920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.455 [2024-07-13 05:25:44.842213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.455 [2024-07-13 05:25:44.851311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.455 [2024-07-13 05:25:44.851804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.455 [2024-07-13 05:25:44.851847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.455 [2024-07-13 05:25:44.851883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.455 [2024-07-13 05:25:44.852190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.455 [2024-07-13 05:25:44.852491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.455 [2024-07-13 05:25:44.852538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.455 [2024-07-13 05:25:44.852562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.455 [2024-07-13 05:25:44.856841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.455 [2024-07-13 05:25:44.865943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.455 [2024-07-13 05:25:44.866460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.455 [2024-07-13 05:25:44.866503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.455 [2024-07-13 05:25:44.866530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.455 [2024-07-13 05:25:44.866825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.455 [2024-07-13 05:25:44.867135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.455 [2024-07-13 05:25:44.867170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.455 [2024-07-13 05:25:44.867194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.455 [2024-07-13 05:25:44.871448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.455 [2024-07-13 05:25:44.880747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.455 [2024-07-13 05:25:44.881371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.455 [2024-07-13 05:25:44.881423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.455 [2024-07-13 05:25:44.881456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.455 [2024-07-13 05:25:44.881760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.455 [2024-07-13 05:25:44.882078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.455 [2024-07-13 05:25:44.882113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.455 [2024-07-13 05:25:44.882141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.455 [2024-07-13 05:25:44.886411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.455 [2024-07-13 05:25:44.895497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.455 [2024-07-13 05:25:44.896001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.455 [2024-07-13 05:25:44.896044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.455 [2024-07-13 05:25:44.896073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.455 [2024-07-13 05:25:44.896369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.455 [2024-07-13 05:25:44.896667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.455 [2024-07-13 05:25:44.896701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.455 [2024-07-13 05:25:44.896732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.456 [2024-07-13 05:25:44.901018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.456 [2024-07-13 05:25:44.910119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.456 [2024-07-13 05:25:44.910641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.456 [2024-07-13 05:25:44.910682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.456 [2024-07-13 05:25:44.910709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.456 [2024-07-13 05:25:44.911017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.456 [2024-07-13 05:25:44.911316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.456 [2024-07-13 05:25:44.911350] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.456 [2024-07-13 05:25:44.911374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.456 [2024-07-13 05:25:44.915666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.456 [2024-07-13 05:25:44.924768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.456 [2024-07-13 05:25:44.925274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.456 [2024-07-13 05:25:44.925316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.456 [2024-07-13 05:25:44.925344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.456 [2024-07-13 05:25:44.925639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.456 [2024-07-13 05:25:44.925951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.456 [2024-07-13 05:25:44.925986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.456 [2024-07-13 05:25:44.926010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.456 [2024-07-13 05:25:44.930249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.456 [2024-07-13 05:25:44.939413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.456 [2024-07-13 05:25:44.939950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.456 [2024-07-13 05:25:44.939993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.456 [2024-07-13 05:25:44.940021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.456 [2024-07-13 05:25:44.940317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.456 [2024-07-13 05:25:44.940612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.456 [2024-07-13 05:25:44.940647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.456 [2024-07-13 05:25:44.940672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.456 [2024-07-13 05:25:44.944893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.716 [2024-07-13 05:25:44.954322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.716 [2024-07-13 05:25:44.954844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.716 [2024-07-13 05:25:44.954895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.716 [2024-07-13 05:25:44.954926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.716 [2024-07-13 05:25:44.955221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.716 [2024-07-13 05:25:44.955516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.716 [2024-07-13 05:25:44.955551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.716 [2024-07-13 05:25:44.955575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.716 [2024-07-13 05:25:44.959900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.716 [2024-07-13 05:25:44.969110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.716 [2024-07-13 05:25:44.969622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.716 [2024-07-13 05:25:44.969665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.716 [2024-07-13 05:25:44.969693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.716 [2024-07-13 05:25:44.970002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.716 [2024-07-13 05:25:44.970301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.716 [2024-07-13 05:25:44.970335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.716 [2024-07-13 05:25:44.970359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.716 [2024-07-13 05:25:44.974637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.716 [2024-07-13 05:25:44.983720] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.716 [2024-07-13 05:25:44.984218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.716 [2024-07-13 05:25:44.984261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.716 [2024-07-13 05:25:44.984289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.716 [2024-07-13 05:25:44.984584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.716 [2024-07-13 05:25:44.984896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.716 [2024-07-13 05:25:44.984931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.716 [2024-07-13 05:25:44.984955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.716 [2024-07-13 05:25:44.989232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.717 [2024-07-13 05:25:44.998542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.717 [2024-07-13 05:25:44.999026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.717 [2024-07-13 05:25:44.999068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.717 [2024-07-13 05:25:44.999095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.717 [2024-07-13 05:25:44.999395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.717 [2024-07-13 05:25:44.999691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.717 [2024-07-13 05:25:44.999724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.717 [2024-07-13 05:25:44.999748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.717 [2024-07-13 05:25:45.004007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.717 [2024-07-13 05:25:45.006751] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:38.717 [2024-07-13 05:25:45.006801] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:38.717 [2024-07-13 05:25:45.006837] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:38.717 [2024-07-13 05:25:45.006859] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:38.717 [2024-07-13 05:25:45.006894] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:38.717 [2024-07-13 05:25:45.006990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:38.717 [2024-07-13 05:25:45.007040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:38.717 [2024-07-13 05:25:45.007050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:38.717 [2024-07-13 05:25:45.013290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.717 [2024-07-13 05:25:45.013875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.717 [2024-07-13 05:25:45.013924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.717 [2024-07-13 05:25:45.013954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.717 [2024-07-13 05:25:45.014262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.717 [2024-07-13 05:25:45.014565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.717 [2024-07-13 05:25:45.014599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.717 [2024-07-13 05:25:45.014627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.717 [2024-07-13 05:25:45.018937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.717 [2024-07-13 05:25:45.028011] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.717 [2024-07-13 05:25:45.028698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.717 [2024-07-13 05:25:45.028750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.717 [2024-07-13 05:25:45.028784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.717 [2024-07-13 05:25:45.029100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.717 [2024-07-13 05:25:45.029403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.717 [2024-07-13 05:25:45.029436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.717 [2024-07-13 05:25:45.029464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.717 [2024-07-13 05:25:45.033740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
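The `Total cores available: 3` notice earlier in this block and the three `Reactor started on core 1/2/3` lines are the SPDK event framework binding one reactor thread per core in the application's core mask. A hedged sketch of that startup path, using SPDK's public app API as found in recent releases (the `spdk_app_opts_init`/`spdk_app_start` signatures should be checked against the tree under test, and the `0x0E` mask selecting cores 1-3 is an assumption chosen to match the log, likely passed on the real command line via `-m`):

/* Sketch: minimal SPDK app whose reactor mask covers cores 1-3, which would
 * produce "Total cores available: 3" and one "Reactor started on core N"
 * notice per core. API names per recent SPDK; verify against the tree in use. */
#include "spdk/event.h"
#include "spdk/log.h"

static void
start_fn(void *arg)
{
    /* Runs on the main reactor once the framework is up; a real nvmf target
     * would create its transports and subsystems here. */
    SPDK_NOTICELOG("app started\n");
}

int
main(int argc, char **argv)
{
    struct spdk_app_opts opts;
    int rc;

    (void)argc;
    (void)argv;

    spdk_app_opts_init(&opts, sizeof(opts));
    opts.name = "reactor_demo";
    opts.reactor_mask = "0x0E";   /* cores 1, 2, 3 -> three reactors (assumed mask) */

    rc = spdk_app_start(&opts, start_fn, NULL);
    spdk_app_fini();
    return rc;
}

The `Tracepoint Group Mask 0xFFFF specified` notices just above correspond to the trace setup done in the same startup path; the log itself documents how to read the result (`spdk_trace -s nvmf -i 0`, or copy /dev/shm/nvmf_trace.0 offline).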
00:36:38.717 [2024-07-13 05:25:45.042758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.717 [2024-07-13 05:25:45.043221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.717 [2024-07-13 05:25:45.043262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.717 [2024-07-13 05:25:45.043289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.717 [2024-07-13 05:25:45.043585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.717 [2024-07-13 05:25:45.043895] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.717 [2024-07-13 05:25:45.043927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.717 [2024-07-13 05:25:45.043951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.717 [2024-07-13 05:25:45.048344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.717 [2024-07-13 05:25:45.057595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.717 [2024-07-13 05:25:45.058079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.717 [2024-07-13 05:25:45.058122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.717 [2024-07-13 05:25:45.058150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.717 [2024-07-13 05:25:45.058451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.717 [2024-07-13 05:25:45.058755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.717 [2024-07-13 05:25:45.058788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.717 [2024-07-13 05:25:45.058812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.717 [2024-07-13 05:25:45.063209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.717 [2024-07-13 05:25:45.072228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.717 [2024-07-13 05:25:45.072726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.717 [2024-07-13 05:25:45.072767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.717 [2024-07-13 05:25:45.072793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.717 [2024-07-13 05:25:45.073104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.717 [2024-07-13 05:25:45.073410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.717 [2024-07-13 05:25:45.073442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.717 [2024-07-13 05:25:45.073466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.717 [2024-07-13 05:25:45.077689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.717 [2024-07-13 05:25:45.086907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.717 [2024-07-13 05:25:45.087478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.717 [2024-07-13 05:25:45.087523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.717 [2024-07-13 05:25:45.087552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.717 [2024-07-13 05:25:45.087859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.717 [2024-07-13 05:25:45.088171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.717 [2024-07-13 05:25:45.088204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.717 [2024-07-13 05:25:45.088230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.717 [2024-07-13 05:25:45.092542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.717 [2024-07-13 05:25:45.101676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.717 [2024-07-13 05:25:45.102406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.717 [2024-07-13 05:25:45.102462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.717 [2024-07-13 05:25:45.102495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.717 [2024-07-13 05:25:45.102807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.717 [2024-07-13 05:25:45.103130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.717 [2024-07-13 05:25:45.103164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.717 [2024-07-13 05:25:45.103193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.717 [2024-07-13 05:25:45.107588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.717 [2024-07-13 05:25:45.116588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.717 [2024-07-13 05:25:45.117270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.717 [2024-07-13 05:25:45.117326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.717 [2024-07-13 05:25:45.117360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.717 [2024-07-13 05:25:45.117672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.717 [2024-07-13 05:25:45.117992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.717 [2024-07-13 05:25:45.118027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.717 [2024-07-13 05:25:45.118055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.717 [2024-07-13 05:25:45.122380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.717 [2024-07-13 05:25:45.131330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.717 [2024-07-13 05:25:45.131829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.717 [2024-07-13 05:25:45.131878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.717 [2024-07-13 05:25:45.131918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.717 [2024-07-13 05:25:45.132212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.717 [2024-07-13 05:25:45.132509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.717 [2024-07-13 05:25:45.132548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.718 [2024-07-13 05:25:45.132572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.718 [2024-07-13 05:25:45.136862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.718 [2024-07-13 05:25:45.145978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.718 [2024-07-13 05:25:45.146501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.718 [2024-07-13 05:25:45.146543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.718 [2024-07-13 05:25:45.146569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.718 [2024-07-13 05:25:45.146863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.718 [2024-07-13 05:25:45.147180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.718 [2024-07-13 05:25:45.147213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.718 [2024-07-13 05:25:45.147235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.718 [2024-07-13 05:25:45.151516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.718 [2024-07-13 05:25:45.160547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.718 [2024-07-13 05:25:45.161031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.718 [2024-07-13 05:25:45.161073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.718 [2024-07-13 05:25:45.161101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.718 [2024-07-13 05:25:45.161401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.718 [2024-07-13 05:25:45.161696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.718 [2024-07-13 05:25:45.161728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.718 [2024-07-13 05:25:45.161750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.718 [2024-07-13 05:25:45.165969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.718 [2024-07-13 05:25:45.175209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.718 [2024-07-13 05:25:45.175718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.718 [2024-07-13 05:25:45.175759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.718 [2024-07-13 05:25:45.175786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.718 [2024-07-13 05:25:45.176100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.718 [2024-07-13 05:25:45.176398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.718 [2024-07-13 05:25:45.176430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.718 [2024-07-13 05:25:45.176453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.718 [2024-07-13 05:25:45.180733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.718 [2024-07-13 05:25:45.189903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.718 [2024-07-13 05:25:45.190373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.718 [2024-07-13 05:25:45.190421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.718 [2024-07-13 05:25:45.190448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.718 [2024-07-13 05:25:45.190745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.718 [2024-07-13 05:25:45.191051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.718 [2024-07-13 05:25:45.191084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.718 [2024-07-13 05:25:45.191108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.718 [2024-07-13 05:25:45.195294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.718 [2024-07-13 05:25:45.204421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.718 [2024-07-13 05:25:45.204885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.718 [2024-07-13 05:25:45.204926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.718 [2024-07-13 05:25:45.204952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.718 [2024-07-13 05:25:45.205242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.718 [2024-07-13 05:25:45.205532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.718 [2024-07-13 05:25:45.205565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.718 [2024-07-13 05:25:45.205588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.718 [2024-07-13 05:25:45.209807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.978 [2024-07-13 05:25:45.219142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.978 [2024-07-13 05:25:45.219681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.978 [2024-07-13 05:25:45.219724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.978 [2024-07-13 05:25:45.219752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.978 [2024-07-13 05:25:45.220054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.978 [2024-07-13 05:25:45.220360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.978 [2024-07-13 05:25:45.220393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.978 [2024-07-13 05:25:45.220417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.978 [2024-07-13 05:25:45.224611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.978 [2024-07-13 05:25:45.233809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.978 [2024-07-13 05:25:45.234314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.978 [2024-07-13 05:25:45.234356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.978 [2024-07-13 05:25:45.234389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.978 [2024-07-13 05:25:45.234681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.978 [2024-07-13 05:25:45.234988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.978 [2024-07-13 05:25:45.235020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.978 [2024-07-13 05:25:45.235045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.978 [2024-07-13 05:25:45.239257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.978 [2024-07-13 05:25:45.248533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.978 [2024-07-13 05:25:45.249187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.978 [2024-07-13 05:25:45.249242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.978 [2024-07-13 05:25:45.249274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.978 [2024-07-13 05:25:45.249587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.978 [2024-07-13 05:25:45.249904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.978 [2024-07-13 05:25:45.249938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.978 [2024-07-13 05:25:45.249967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.978 [2024-07-13 05:25:45.254300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.978 [2024-07-13 05:25:45.263426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.978 [2024-07-13 05:25:45.264104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.978 [2024-07-13 05:25:45.264158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.978 [2024-07-13 05:25:45.264191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.978 [2024-07-13 05:25:45.264494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.978 [2024-07-13 05:25:45.264800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.978 [2024-07-13 05:25:45.264834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.978 [2024-07-13 05:25:45.264862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.978 [2024-07-13 05:25:45.269144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.978 [2024-07-13 05:25:45.278184] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.978 [2024-07-13 05:25:45.278702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.978 [2024-07-13 05:25:45.278743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.978 [2024-07-13 05:25:45.278769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.978 [2024-07-13 05:25:45.279074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.978 [2024-07-13 05:25:45.279370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.978 [2024-07-13 05:25:45.279407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.978 [2024-07-13 05:25:45.279431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.978 [2024-07-13 05:25:45.283691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.978 [2024-07-13 05:25:45.292996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.978 [2024-07-13 05:25:45.293509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.978 [2024-07-13 05:25:45.293551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:38.978 [2024-07-13 05:25:45.293578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:38.978 [2024-07-13 05:25:45.293881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:38.978 [2024-07-13 05:25:45.294177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.978 [2024-07-13 05:25:45.294209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.978 [2024-07-13 05:25:45.294232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.978 [2024-07-13 05:25:45.298505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.979 [2024-07-13 05:25:45.307529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-07-13 05:25:45.308001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-13 05:25:45.308043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
[2024-07-13 05:25:45.308070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
[2024-07-13 05:25:45.308363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
[2024-07-13 05:25:45.308659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
[2024-07-13 05:25:45.308691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
[2024-07-13 05:25:45.308714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-07-13 05:25:45.312956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... the identical disconnect / connect-refused / reset-failed sequence repeats 15 more times, with reset attempts roughly every 14 ms from 05:25:45.322 through 05:25:45.526 ...]
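errno 111 here is ECONNREFUSED: bdev_nvme keeps retrying the controller reset, but nothing is listening on 10.0.0.2:4420 yet (the listener is only added further down in this trace), so every connect() is refused and the controller drops back to the failed state until the next poll. A harness that wants to avoid this churn can gate the I/O phase on the listener actually accepting connections. A minimal sketch in bash; wait_for_listener is a hypothetical helper, not part of the SPDK tree:

#!/usr/bin/env bash
# Hypothetical helper (not in the SPDK tree): block until an NVMe/TCP
# listener accepts connections, instead of letting every controller
# reset attempt die on connect() errno 111 (ECONNREFUSED).
wait_for_listener() {
    local addr=$1 port=$2 timeout=${3:-30}
    local i
    for ((i = 0; i < timeout * 10; i++)); do
        # bash's /dev/tcp pseudo-device performs a real TCP connect()
        if timeout 1 bash -c "exec 3<>/dev/tcp/$addr/$port" 2>/dev/null; then
            return 0
        fi
        sleep 0.1
    done
    echo "no listener on $addr:$port after ${timeout}s" >&2
    return 1
}

wait_for_listener 10.0.0.2 4420 && echo "port 4420 is accepting connections"

Since /dev/tcp performs the same three-way handshake the NVMe/TCP initiator needs, success here means a subsequent controller reset will get past the socket stage.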
00:36:39.239 05:25:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 ))
05:25:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0
05:25:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
05:25:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
05:25:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[... reset attempts at 05:25:45.540 and 05:25:45.554 fail with the same connect() errno = 111 sequence as above ...]
05:25:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
05:25:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
05:25:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
05:25:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-07-13 05:25:45.559292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[2024-07-13 05:25:45.563291] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... reset attempts at 05:25:45.568 and 05:25:45.582 fail as above ...]
05:25:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
05:25:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
05:25:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
05:25:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[... reset attempts at 05:25:45.597, 05:25:45.611, 05:25:45.625, 05:25:45.639 and 05:25:45.653 fail as above ...]
Malloc0
05:25:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
05:25:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
05:25:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
05:25:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[... reset attempt at 05:25:45.667 fails as above ...]
05:25:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
05:25:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
05:25:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
05:25:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
05:25:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
05:25:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
05:25:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
05:25:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[... reset attempt at 05:25:45.682 fails as above ...]
[2024-07-13 05:25:45.683573] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
05:25:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
05:25:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 865020
[2024-07-13 05:25:45.696227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:39.497 [2024-07-13 05:25:45.824895] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
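The reset at 05:25:45.696 finally succeeds because the rpc_cmd calls above have, in order, created the TCP transport, a 64 MiB malloc bdev, the cnode1 subsystem with its namespace, and last of all the 4420 listener. The same bring-up can be reproduced by hand against a running nvmf_tgt; a sketch assuming the rpc.py location of this workspace (adjust SPDK_DIR for another tree; rpc.py talks to the target's default RPC socket, /var/tmp/spdk.sock):

#!/usr/bin/env bash
# Sketch of the target bring-up the trace above performs via rpc_cmd.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc=$SPDK_DIR/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8 KiB in-capsule data
$rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Only the final add_listener call opens the port, which is why every reconnect attempt issued before it was refused.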
00:36:49.460
00:36:49.460                                                                                  Latency(us)
00:36:49.460 Device Information                                                       : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:36:49.460 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:36:49.460 Verification LBA range: start 0x0 length 0x4000
00:36:49.460 Nvme1n1                                                                  :      15.01    4416.98      17.25    8994.85     0.00    9514.46    1098.33   43690.67
00:36:49.460 ===================================================================================================================
00:36:49.460 Total                                                                    :               4416.98      17.25    8994.85     0.00    9514.46    1098.33   43690.67
05:25:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
05:25:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
05:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
05:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
05:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
05:25:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
05:25:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
05:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
05:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
05:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
05:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
05:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
05:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
05:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
05:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
05:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
05:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 865686 ']'
05:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 865686
05:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 865686 ']'
05:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 865686
05:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname
05:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
05:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 865686
05:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1
05:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
05:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 865686'
killing process with pid 865686
05:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 865686
05:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 865686
00:36:50.835 05:25:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
05:25:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
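Teardown mirrors the bring-up: sync, delete the subsystem over RPC, unload the kernel initiator stack (removing nvme-tcp also drops its now-unused dependencies, hence the rmmod lines for nvme_fabrics and nvme_keyring), then kill and reap the target pid. A sketch of the same sequence, assuming this workspace's rpc.py path and using the pid and NQN from this run:

#!/usr/bin/env bash
# Sketch of the teardown traced above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
tgt_pid=865686

sync
"$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# Unload the kernel initiator modules; -r also removes unused dependencies.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Stop the target. 'wait' only reaps children of this shell, which is why
# the harness waits on the pid it spawned itself before the next test.
kill "$tgt_pid"
wait "$tgt_pid" 2>/dev/null || true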
05:25:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
05:25:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
05:25:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns
05:25:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
05:25:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
05:25:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:52.735 05:25:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:36:52.735
00:36:52.735 real    0m26.750s
00:36:52.735 user    1m11.522s
00:36:52.735 sys     0m5.466s
05:25:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable
05:25:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvmf_bdevperf
************************************
05:25:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
05:25:59 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
05:25:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
05:25:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
05:25:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvmf_target_disconnect
************************************
05:25:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:36:52.735 * Looking for test storage...
00:36:52.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:36:52.736 05:25:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
05:25:59 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
05:25:59 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
05:25:59 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
05:25:59 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same toolchain directories repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
05:25:59 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[... repeated toolchain directories ...]:/var/lib/snapd/snap/bin
05:25:59 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... repeated toolchain directories ...]:/var/lib/snapd/snap/bin
05:25:59 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH
05:25:59 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... repeated toolchain directories ...]:/var/lib/snapd/snap/bin
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']'
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0
05:25:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
05:25:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64
05:25:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512
05:25:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']'
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs
00:36:52.736 05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
05:25:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
05:25:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]]
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
05:25:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable
05:25:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:36:55.266 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=()
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=()
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=()
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=()
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=()
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=()
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=()
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 ))
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
Found 0000:0a:00.0 (0x8086 - 0x159b)
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
Found 0000:0a:00.1 (0x8086 - 0x159b)
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 ))
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]]
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 ))
05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:55.267 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:55.267 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:55.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:55.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:36:55.267 00:36:55.267 --- 10.0.0.2 ping statistics --- 00:36:55.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:55.267 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:55.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:55.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:36:55.267 00:36:55.267 --- 10.0.0.1 ping statistics --- 00:36:55.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:55.267 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:55.267 ************************************ 00:36:55.267 START TEST nvmf_target_disconnect_tc1 00:36:55.267 ************************************ 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:36:55.267 
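For readers following along, the nvmf_tcp_init steps traced above reduce to a handful of iproute2 commands: the target-side port is moved into its own network namespace so initiator and target traffic actually traverse the link instead of the local stack. A minimal standalone sketch of the same topology, with the device names and 10.0.0.x addressing taken from this run:

# move the target-side port into its own namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# initiator keeps cvl_0_1 in the default namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# let NVMe/TCP traffic to port 4420 in on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two ping checks in the log are the sanity test that both directions of this topology work before any NVMe traffic is attempted.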
05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:36:55.267 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:55.267 EAL: No free 2048 kB hugepages reported on node 1 00:36:55.267 [2024-07-13 05:26:01.519472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.267 [2024-07-13 05:26:01.519597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2000 with addr=10.0.0.2, port=4420 00:36:55.267 [2024-07-13 05:26:01.519691] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:55.267 [2024-07-13 05:26:01.519727] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:55.268 [2024-07-13 05:26:01.519752] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:36:55.268 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:36:55.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:36:55.268 Initializing NVMe Controllers 00:36:55.268 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:36:55.268 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:55.268 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:55.268 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:55.268 00:36:55.268 real 0m0.210s 00:36:55.268 user 0m0.085s 00:36:55.268 sys 
0m0.125s 00:36:55.268 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:55.268 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:55.268 ************************************ 00:36:55.268 END TEST nvmf_target_disconnect_tc1 00:36:55.268 ************************************ 00:36:55.268 05:26:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:36:55.268 05:26:01 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:36:55.268 05:26:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:55.268 05:26:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:55.268 05:26:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:55.268 ************************************ 00:36:55.268 START TEST nvmf_target_disconnect_tc2 00:36:55.268 ************************************ 00:36:55.268 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:36:55.268 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:36:55.268 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:55.268 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:55.268 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:55.268 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:55.268 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=869096 00:36:55.268 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:55.268 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 869096 00:36:55.268 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 869096 ']' 00:36:55.268 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:55.268 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:55.268 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:55.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
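The nvmfappstart/waitforlisten pair being traced here boils down to launching nvmf_tgt inside the target namespace and polling its RPC socket until the app answers. A rough equivalent run from the SPDK tree; the rpc_get_methods probe and the /var/tmp/spdk.sock path are the usual defaults and are assumed here, not taken from this log:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!

# poll the RPC socket until the target is ready to accept commands
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup"; exit 1; }
    sleep 0.5
done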
00:36:55.268 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:55.268 05:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:55.268 [2024-07-13 05:26:01.708125] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:36:55.268 [2024-07-13 05:26:01.708297] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:55.525 EAL: No free 2048 kB hugepages reported on node 1 00:36:55.525 [2024-07-13 05:26:01.843279] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:55.783 [2024-07-13 05:26:02.074921] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:55.783 [2024-07-13 05:26:02.074999] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:55.783 [2024-07-13 05:26:02.075022] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:55.783 [2024-07-13 05:26:02.075040] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:55.783 [2024-07-13 05:26:02.075059] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:55.783 [2024-07-13 05:26:02.075199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:36:55.783 [2024-07-13 05:26:02.075323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:36:55.783 [2024-07-13 05:26:02.075365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:36:55.783 [2024-07-13 05:26:02.075390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:56.349 Malloc0 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:56.349 05:26:02 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:56.349 [2024-07-13 05:26:02.720808] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:56.349 [2024-07-13 05:26:02.750729] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:56.349 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.350 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=869250 00:36:56.350 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:36:56.350 05:26:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:56.607 EAL: No free 2048 kB 
hugepages reported on node 1 00:36:58.520 05:26:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 869096 00:36:58.520 05:26:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:36:58.520 Read completed with error (sct=0, sc=8) 00:36:58.520 starting I/O failed 00:36:58.520 Read completed with error (sct=0, sc=8) 00:36:58.520 starting I/O failed 00:36:58.520 Read completed with error (sct=0, sc=8) 00:36:58.520 starting I/O failed 00:36:58.520 Read completed with error (sct=0, sc=8) 00:36:58.520 starting I/O failed 00:36:58.520 Read completed with error (sct=0, sc=8) 00:36:58.520 starting I/O failed 00:36:58.520 Read completed with error (sct=0, sc=8) 00:36:58.520 starting I/O failed 00:36:58.520 Read completed with error (sct=0, sc=8) 00:36:58.520 starting I/O failed 00:36:58.520 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 
starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 [2024-07-13 05:26:04.790930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 [2024-07-13 05:26:04.791609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O 
failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Write completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 Read completed with error (sct=0, sc=8) 00:36:58.521 starting I/O failed 00:36:58.521 [2024-07-13 05:26:04.792277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.521 [2024-07-13 05:26:04.792548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.521 [2024-07-13 05:26:04.792589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:58.521 qpair failed and we were unable to recover it. 00:36:58.521 [2024-07-13 05:26:04.792825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.521 [2024-07-13 05:26:04.792890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:58.521 qpair failed and we were unable to recover it. 00:36:58.521 [2024-07-13 05:26:04.793069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.521 [2024-07-13 05:26:04.793103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:58.521 qpair failed and we were unable to recover it. 00:36:58.521 [2024-07-13 05:26:04.793257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.521 [2024-07-13 05:26:04.793296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:58.521 qpair failed and we were unable to recover it. 
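Each "completed with error (sct=0, sc=8)" line above is one in-flight I/O from the reconnect tool's 32-deep queues being failed back after kill -9 took down the target: status code type 0 is NVMe Generic Command Status, and status code 0x08 there is Command Aborted due to SQ Deletion, the status the driver reports as it tears the queue pairs down. A quick way to tally them from a captured run; reconnect.log is a hypothetical capture of this output:

# tally aborted reads vs writes ($1 is Read/Write in the tool's raw output;
# bump the field index if your capture prefixes each line with a timestamp)
awk '/completed with error \(sct=0, sc=8\)/ { n[$1]++ } END { for (k in n) print k, n[k] }' reconnect.log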
00:36:58.521 [2024-07-13 05:26:04.793453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.521 [2024-07-13 05:26:04.793502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:58.521 qpair failed and we were unable to recover it. 00:36:58.521 [2024-07-13 05:26:04.793769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.521 [2024-07-13 05:26:04.793820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.521 qpair failed and we were unable to recover it. 00:36:58.521 [2024-07-13 05:26:04.794005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.521 [2024-07-13 05:26:04.794040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.521 qpair failed and we were unable to recover it. 00:36:58.521 [2024-07-13 05:26:04.794230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.521 [2024-07-13 05:26:04.794278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.521 qpair failed and we were unable to recover it. 00:36:58.521 [2024-07-13 05:26:04.794458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.521 [2024-07-13 05:26:04.794491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.521 qpair failed and we were unable to recover it. 00:36:58.521 [2024-07-13 05:26:04.794626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.794657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 00:36:58.522 [2024-07-13 05:26:04.794847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.794886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 00:36:58.522 [2024-07-13 05:26:04.795068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.795117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 00:36:58.522 [2024-07-13 05:26:04.795278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.795315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 00:36:58.522 [2024-07-13 05:26:04.795530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.795583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 
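Two numeric codes recur from here on: the CQ transport error -6 attached to each dead qpair, and errno 111 from every reconnect attempt. They map to ENXIO (the queue's device is gone) and ECONNREFUSED (the namespace's stack answers with a reset because nothing listens on 10.0.0.2:4420 any more). If the errno helper from moreutils happens to be installed, the mapping can be checked directly:

errno 6      # ENXIO 6 No such device or address
errno 111    # ECONNREFUSED 111 Connection refused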
00:36:58.522 [2024-07-13 05:26:04.795759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.795791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 00:36:58.522 [2024-07-13 05:26:04.795972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.796006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 00:36:58.522 [2024-07-13 05:26:04.796178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.796213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 00:36:58.522 [2024-07-13 05:26:04.796468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.796503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 00:36:58.522 [2024-07-13 05:26:04.796643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.796693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 00:36:58.522 [2024-07-13 05:26:04.796918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.796952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 00:36:58.522 [2024-07-13 05:26:04.797094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.797128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 00:36:58.522 [2024-07-13 05:26:04.797305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.797337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 00:36:58.522 [2024-07-13 05:26:04.797565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.797598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 00:36:58.522 [2024-07-13 05:26:04.797760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.797792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 
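To confirm the refusal is coming from the peer's stack rather than a firewall drop (a drop would hang until timeout instead of failing instantly), bash's built-in /dev/tcp pseudo-device is enough for a one-shot probe from the initiator side:

if (echo >/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
    echo "10.0.0.2:4420 accepted the connection"
else
    echo "10.0.0.2:4420 refused - no NVMe/TCP listener"
fi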
00:36:58.522 [2024-07-13 05:26:04.797972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.798005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 00:36:58.522 [2024-07-13 05:26:04.798156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.798189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 00:36:58.522 [2024-07-13 05:26:04.798346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.798378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 00:36:58.522 [2024-07-13 05:26:04.798518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.798549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 00:36:58.522 [2024-07-13 05:26:04.798784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.798817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 00:36:58.522 [2024-07-13 05:26:04.798977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.799008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 
00:36:58.522 Read completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Write completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Write completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Write completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Write completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Write completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Read completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Read completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Write completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Read completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Read completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Write completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Read completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Read completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Read completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Write completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Read completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Write completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Write completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Write completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Read completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Read completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Read completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Read completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Read completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Write completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Read completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Read completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Read completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Read completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Read completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 Write completed with error (sct=0, sc=8) 00:36:58.522 starting I/O failed 00:36:58.522 [2024-07-13 05:26:04.799654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:58.522 [2024-07-13 05:26:04.799831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.799910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 
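The retries above cycle through several tqpair object addresses (0x615000210000, 0x6150001f2780, 0x61500021ff00, ...), one per admin or I/O queue pair the tool keeps trying to rebuild. Summarizing a captured log by address shows how the reconnect attempts were spread across qpairs; this uses the same hypothetical reconnect.log capture as above:

# count connection failures per qpair object, most-retried first
grep -o 'tqpair=0x[0-9a-f]*' reconnect.log | sort | uniq -c | sort -rn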
00:36:58.522 [2024-07-13 05:26:04.800091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.800127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 00:36:58.522 [2024-07-13 05:26:04.800316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.800353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 00:36:58.522 [2024-07-13 05:26:04.800658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.800715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 00:36:58.522 [2024-07-13 05:26:04.800922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.800955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 00:36:58.522 [2024-07-13 05:26:04.801122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.801172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 00:36:58.522 [2024-07-13 05:26:04.801399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.801431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 00:36:58.522 [2024-07-13 05:26:04.801557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.801589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.522 qpair failed and we were unable to recover it. 00:36:58.522 [2024-07-13 05:26:04.801768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.522 [2024-07-13 05:26:04.801803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.802035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.802067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.802252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.802316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 
00:36:58.523 [2024-07-13 05:26:04.802675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.802735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.802901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.802936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.803102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.803135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.803329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.803363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.803647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.803704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.803857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.803896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.804041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.804073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.804251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.804300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.804543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.804598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.804774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.804828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 
00:36:58.523 [2024-07-13 05:26:04.805054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.805102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.805338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.805386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.805586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.805622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.805768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.805802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.806011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.806046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.806194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.806227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.806423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.806457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.806616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.806649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.806806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.806859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.807043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.807078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 
00:36:58.523 [2024-07-13 05:26:04.807251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.807284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.807450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.807501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.807773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.807839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.808038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.808071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.808267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.808300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.808460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.808493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.808652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.808685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.808875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.808924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.809073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.809109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 00:36:58.523 [2024-07-13 05:26:04.809293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.523 [2024-07-13 05:26:04.809327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:58.523 qpair failed and we were unable to recover it. 
00:36:58.528 [2024-07-13 05:26:04.853465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.528 [2024-07-13 05:26:04.853528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.528 qpair failed and we were unable to recover it. 00:36:58.528 [2024-07-13 05:26:04.853861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.528 [2024-07-13 05:26:04.853926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.528 qpair failed and we were unable to recover it. 00:36:58.528 [2024-07-13 05:26:04.854131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.528 [2024-07-13 05:26:04.854164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.528 qpair failed and we were unable to recover it. 00:36:58.528 [2024-07-13 05:26:04.854396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.528 [2024-07-13 05:26:04.854456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.528 qpair failed and we were unable to recover it. 00:36:58.528 [2024-07-13 05:26:04.854702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.854798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.854991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.855024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.855219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.855306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.855498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.855538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.855717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.855751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.855903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.855958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 
00:36:58.529 [2024-07-13 05:26:04.856163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.856214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.856401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.856447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.856694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.856750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.856940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.856974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.857117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.857151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.857323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.857358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.857534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.857570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.857748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.857781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.857969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.858007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.858210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.858252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 
00:36:58.529 [2024-07-13 05:26:04.858465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.858497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.858748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.858807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.858983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.859016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.859181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.859213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.859345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.859379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.859510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.859543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.859709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.859746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.859953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.860001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.860187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.860226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.860418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.860452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 
00:36:58.529 [2024-07-13 05:26:04.860641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.860680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.860861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.860908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.861097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.861131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.861350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.861415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.861628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.861662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.861820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.861852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.862062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.862098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.862267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.862304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.862490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.529 [2024-07-13 05:26:04.862523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.529 qpair failed and we were unable to recover it. 00:36:58.529 [2024-07-13 05:26:04.862814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.862878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 
00:36:58.530 [2024-07-13 05:26:04.863062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.863099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.863290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.863323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.863519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.863555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.863743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.863775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.863950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.863983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.864166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.864205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.864393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.864432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.864620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.864654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.864820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.864853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.865021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.865059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 
00:36:58.530 [2024-07-13 05:26:04.865275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.865309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.865595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.865652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.865839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.865878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.866044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.866078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.866270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.866331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.866523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.866555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.866740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.866772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.866959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.866998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.867195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.867228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.867416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.867454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 
00:36:58.530 [2024-07-13 05:26:04.867682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.867739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.867935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.867970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.868107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.868141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.868323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.868361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.868519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.868556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.868734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.868766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.868975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.869012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.869192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.869225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.869400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.869432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.869742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.869812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 
00:36:58.530 [2024-07-13 05:26:04.870007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.870040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.870184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.870216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.870407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.870440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.870635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.870672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.870854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.870892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.871091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.871144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.871316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.871356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.871534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.871568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.871775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.871812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 00:36:58.530 [2024-07-13 05:26:04.871996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.530 [2024-07-13 05:26:04.872030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.530 qpair failed and we were unable to recover it. 
00:36:58.531 [2024-07-13 05:26:04.872174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.872208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.872390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.872427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.872645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.872683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.872897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.872941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.873121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.873158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.873315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.873351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.873565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.873598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.873774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.873810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.873971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.874004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.874146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.874179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 
00:36:58.531 [2024-07-13 05:26:04.874319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.874353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.874518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.874551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.874747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.874779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.875008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.875041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.875210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.875244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.875378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.875411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.875597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.875630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.875878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.875929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.876069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.876102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.876263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.876301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 
00:36:58.531 [2024-07-13 05:26:04.876462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.876494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.876678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.876710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.876911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.876965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.877180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.877219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.877377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.877412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.877581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.877615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.877753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.877786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.877949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.877984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.878123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.878157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.878367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.878405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 
00:36:58.531 [2024-07-13 05:26:04.878595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.878630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.878776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.878812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.878959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.878993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.879134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.879167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.879300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.879333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.879508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.879545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.879706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.879740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.879876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.879928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.880107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.880143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 00:36:58.531 [2024-07-13 05:26:04.880333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.531 [2024-07-13 05:26:04.880366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.531 qpair failed and we were unable to recover it. 
00:36:58.531 [2024-07-13 05:26:04.880506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.532 [2024-07-13 05:26:04.880538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.532 qpair failed and we were unable to recover it. 00:36:58.532 [2024-07-13 05:26:04.880699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.532 [2024-07-13 05:26:04.880732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.532 qpair failed and we were unable to recover it. 00:36:58.532 [2024-07-13 05:26:04.880930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.532 [2024-07-13 05:26:04.880963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.532 qpair failed and we were unable to recover it. 00:36:58.532 [2024-07-13 05:26:04.881129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.532 [2024-07-13 05:26:04.881184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.532 qpair failed and we were unable to recover it. 00:36:58.532 [2024-07-13 05:26:04.881341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.532 [2024-07-13 05:26:04.881374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.532 qpair failed and we were unable to recover it. 00:36:58.532 [2024-07-13 05:26:04.881537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.532 [2024-07-13 05:26:04.881571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.532 qpair failed and we were unable to recover it. 00:36:58.532 [2024-07-13 05:26:04.881713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.532 [2024-07-13 05:26:04.881746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.532 qpair failed and we were unable to recover it. 00:36:58.532 [2024-07-13 05:26:04.881950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.532 [2024-07-13 05:26:04.881987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.532 qpair failed and we were unable to recover it. 00:36:58.532 [2024-07-13 05:26:04.882154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.532 [2024-07-13 05:26:04.882186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.532 qpair failed and we were unable to recover it. 00:36:58.532 [2024-07-13 05:26:04.882349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.532 [2024-07-13 05:26:04.882383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.532 qpair failed and we were unable to recover it. 
00:36:58.532 [2024-07-13 05:26:04.882540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:58.532 [2024-07-13 05:26:04.882573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:58.532 qpair failed and we were unable to recover it.
00:36:58.532 [2024-07-13 05:26:04.882948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:58.532 [2024-07-13 05:26:04.883002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:36:58.532 qpair failed and we were unable to recover it.
[... the same three-line failure repeats continuously from 2024-07-13 05:26:04.882734 through 05:26:04.929705 (log prefix 00:36:58.532 to 00:36:58.537), alternating between tqpair=0x6150001ffe80 and tqpair=0x61500021ff00; every connect() to addr=10.0.0.2, port=4420 returned errno = 111 and ended with "qpair failed and we were unable to recover it." ...]
00:36:58.537 [2024-07-13 05:26:04.929881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.537 [2024-07-13 05:26:04.929917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.537 qpair failed and we were unable to recover it. 00:36:58.537 [2024-07-13 05:26:04.930120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.537 [2024-07-13 05:26:04.930153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.537 qpair failed and we were unable to recover it. 00:36:58.537 [2024-07-13 05:26:04.930397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.537 [2024-07-13 05:26:04.930454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.537 qpair failed and we were unable to recover it. 00:36:58.537 [2024-07-13 05:26:04.930656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.537 [2024-07-13 05:26:04.930692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.537 qpair failed and we were unable to recover it. 00:36:58.537 [2024-07-13 05:26:04.930854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.537 [2024-07-13 05:26:04.930898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.537 qpair failed and we were unable to recover it. 00:36:58.537 [2024-07-13 05:26:04.931079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.537 [2024-07-13 05:26:04.931116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.537 qpair failed and we were unable to recover it. 00:36:58.537 [2024-07-13 05:26:04.931298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.537 [2024-07-13 05:26:04.931335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.537 qpair failed and we were unable to recover it. 00:36:58.537 [2024-07-13 05:26:04.931522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.537 [2024-07-13 05:26:04.931555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.537 qpair failed and we were unable to recover it. 00:36:58.537 [2024-07-13 05:26:04.931730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.537 [2024-07-13 05:26:04.931767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.537 qpair failed and we were unable to recover it. 00:36:58.537 [2024-07-13 05:26:04.931940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.537 [2024-07-13 05:26:04.931977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.537 qpair failed and we were unable to recover it. 
00:36:58.537 [2024-07-13 05:26:04.932158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.537 [2024-07-13 05:26:04.932191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.537 qpair failed and we were unable to recover it. 00:36:58.537 [2024-07-13 05:26:04.932437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.537 [2024-07-13 05:26:04.932495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.537 qpair failed and we were unable to recover it. 00:36:58.537 [2024-07-13 05:26:04.932696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.537 [2024-07-13 05:26:04.932733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.537 qpair failed and we were unable to recover it. 00:36:58.537 [2024-07-13 05:26:04.932899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.537 [2024-07-13 05:26:04.932933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.537 qpair failed and we were unable to recover it. 00:36:58.537 [2024-07-13 05:26:04.933102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.537 [2024-07-13 05:26:04.933135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.537 qpair failed and we were unable to recover it. 00:36:58.537 [2024-07-13 05:26:04.933305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.537 [2024-07-13 05:26:04.933339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.537 qpair failed and we were unable to recover it. 00:36:58.537 [2024-07-13 05:26:04.933511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.537 [2024-07-13 05:26:04.933544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.537 qpair failed and we were unable to recover it. 00:36:58.537 [2024-07-13 05:26:04.933687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.537 [2024-07-13 05:26:04.933719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.537 qpair failed and we were unable to recover it. 00:36:58.537 [2024-07-13 05:26:04.933916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.537 [2024-07-13 05:26:04.933953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.537 qpair failed and we were unable to recover it. 00:36:58.537 [2024-07-13 05:26:04.934110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.934143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 
00:36:58.538 [2024-07-13 05:26:04.934273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.934305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.934479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.934516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.934749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.934785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.934948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.934982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.935164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.935200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.935387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.935419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.935583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.935617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.935804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.935841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.936028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.936061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.936202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.936234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 
00:36:58.538 [2024-07-13 05:26:04.936401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.936434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.936631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.936664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.936873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.936909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.937090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.937126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.937296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.937329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.937492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.937524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.937664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.937716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.937897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.937931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.938112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.938149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.938321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.938362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 
00:36:58.538 [2024-07-13 05:26:04.938571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.938603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.938752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.938788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.938955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.938999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.939155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.939188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.939353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.939386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.939542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.939575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.939779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.939813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.940029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.940065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.940224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.940256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.940418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.940451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 
00:36:58.538 [2024-07-13 05:26:04.940640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.940676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.940851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.940897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.941108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.941141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.941309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.941345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.538 [2024-07-13 05:26:04.941555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.538 [2024-07-13 05:26:04.941592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.538 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.941752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.941798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.941958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.942010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.942186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.942222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.942407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.942440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.942577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.942610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 
00:36:58.539 [2024-07-13 05:26:04.942796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.942829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.943045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.943078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.943257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.943294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.943466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.943502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.943674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.943711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.943927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.943960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.944189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.944242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.944419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.944454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.944617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.944650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.944819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.944858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 
00:36:58.539 [2024-07-13 05:26:04.945031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.945065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.945270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.945307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.945564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.945622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.945815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.945848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.946039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.946076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.946254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.946291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.946469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.946502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.946696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.946733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.946937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.946974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.947127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.947164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 
00:36:58.539 [2024-07-13 05:26:04.947339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.947376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.947560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.947597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.947780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.947813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.948009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.948045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.948215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.948252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.948463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.948496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.948673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.948710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.948909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.948946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.949127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.949160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.949302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.949336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 
00:36:58.539 [2024-07-13 05:26:04.949517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.949568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.949776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.949808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.949999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.950036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.950227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.950263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.950441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.950474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.539 qpair failed and we were unable to recover it. 00:36:58.539 [2024-07-13 05:26:04.950631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.539 [2024-07-13 05:26:04.950668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.950877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.950910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.951040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.951073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.951255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.951291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.951539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.951595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 
00:36:58.540 [2024-07-13 05:26:04.951793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.951836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.952034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.952067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.952286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.952319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.952477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.952510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.952645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.952677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.952835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.952875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.953056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.953089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.953249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.953281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.953452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.953489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.953647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.953681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 
00:36:58.540 [2024-07-13 05:26:04.953816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.953873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.954078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.954114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.954292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.954325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.954489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.954522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.954722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.954759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.954914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.954948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.955071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.955104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.955325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.955362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.955568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.955601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.955804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.955846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 
00:36:58.540 [2024-07-13 05:26:04.956045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.956078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.956293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.956327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.956500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.956547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.956723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.956760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.956974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.957008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.957194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.957230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.957407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.957443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.957628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.957660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.957870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.957907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.958054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.958091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 
00:36:58.540 [2024-07-13 05:26:04.958305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.958337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.958499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.958537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.958722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.958772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.958995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.959028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.959162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.959195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.959349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.540 [2024-07-13 05:26:04.959382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.540 qpair failed and we were unable to recover it. 00:36:58.540 [2024-07-13 05:26:04.959537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.959569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.959729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.959763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.959953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.959990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.960136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.960169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 
00:36:58.541 [2024-07-13 05:26:04.960371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.960407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.960617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.960654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.960831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.960869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.961023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.961060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.961238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.961271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.961399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.961432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.961639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.961692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.961884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.961925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.962104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.962138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.962310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.962344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 
00:36:58.541 [2024-07-13 05:26:04.962507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.962540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.962698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.962731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.962900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.962939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.963141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.963178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.963339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.963373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.963535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.963572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.963782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.963818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.963985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.964018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.964213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.964270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.964448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.964485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 
00:36:58.541 [2024-07-13 05:26:04.964673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.964706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.964892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.964929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.965131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.965168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.965350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.965383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.965565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.965623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.965772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.965808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.965980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.966013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.966220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.966257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.966434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.966470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.966655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.966688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 
00:36:58.541 [2024-07-13 05:26:04.966885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.966935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.967102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.967135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.967363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.967396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.967552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.967588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.967759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.967795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.967981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.968014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.541 [2024-07-13 05:26:04.968195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.541 [2024-07-13 05:26:04.968231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.541 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.968402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.968435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.968593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.968630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.968817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.968853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 
00:36:58.542 [2024-07-13 05:26:04.969038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.969074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.969231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.969264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.969453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.969517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.969723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.969760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.969945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.969978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.970197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.970250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.970472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.970518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.970716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.970750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.970958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.970997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.971150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.971187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 
00:36:58.542 [2024-07-13 05:26:04.971373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.971407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.971559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.971597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.971799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.971836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.972058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.972092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.972290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.972329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.972529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.972566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.972758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.972791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.972969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.973007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.973157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.973195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.973376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.973409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 
00:36:58.542 [2024-07-13 05:26:04.973646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.973706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.973895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.973929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.974090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.974124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.974309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.974347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.974542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.974576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.974707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.974739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.974941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.974980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.975123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.975170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.975338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.975370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.975532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.975566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 
00:36:58.542 [2024-07-13 05:26:04.975748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.975785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.975953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.975986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.542 qpair failed and we were unable to recover it. 00:36:58.542 [2024-07-13 05:26:04.976149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.542 [2024-07-13 05:26:04.976201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.976383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.976420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.976608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.976642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.976825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.976863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.977022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.977059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.977252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.977286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.977464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.977530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.977759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.977798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 
00:36:58.543 [2024-07-13 05:26:04.977991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.978025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.978212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.978249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.978418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.978454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.978607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.978641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.978784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.978818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.978973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.979008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.979170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.979208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.979374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.979409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.979584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.979618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.979755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.979789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 
00:36:58.543 [2024-07-13 05:26:04.980002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.980040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.980216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.980253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.980427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.980460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.980725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.980783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.980974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.981011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.981190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.981226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.981410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.981443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.981633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.981690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.981877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.981910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.982077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.982110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 
00:36:58.543 [2024-07-13 05:26:04.982273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.982306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.982479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.982538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.982688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.982725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.982874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.982911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.983071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.983104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.983283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.983320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.983534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.983567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.983753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.983790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.983951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.983985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.984148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.984180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 
00:36:58.543 [2024-07-13 05:26:04.984369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.984402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.984566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.984602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.984834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.543 [2024-07-13 05:26:04.984876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.543 qpair failed and we were unable to recover it. 00:36:58.543 [2024-07-13 05:26:04.985080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.985113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.985318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.985355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.985526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.985562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.985747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.985781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.985939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.985977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.986163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.986199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.986378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.986414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 
00:36:58.544 [2024-07-13 05:26:04.986571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.986604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.986767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.986800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.987023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.987060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.987232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.987268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.987420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.987453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.987609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.987642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.987828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.987878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.988033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.988070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.988279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.988311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.988552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.988590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 
00:36:58.544 [2024-07-13 05:26:04.988771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.988807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.988979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.989012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.989144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.989177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.989349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.989385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.989602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.989635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.989810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.989846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.990015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.990048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.990202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.990240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.990533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.990591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.990790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.990826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 
00:36:58.544 [2024-07-13 05:26:04.991052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.991086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.991269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.991305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.991556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.991617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.991836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.991892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.992061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.992094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.992253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.992287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.992437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.992490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.992686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.992720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.992884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.992918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.993072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.993108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 
00:36:58.544 [2024-07-13 05:26:04.993328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.993388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.993565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.993602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.993779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.544 [2024-07-13 05:26:04.993812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.544 qpair failed and we were unable to recover it. 00:36:58.544 [2024-07-13 05:26:04.993979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.545 [2024-07-13 05:26:04.994016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.545 qpair failed and we were unable to recover it. 00:36:58.545 [2024-07-13 05:26:04.994233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.545 [2024-07-13 05:26:04.994292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.545 qpair failed and we were unable to recover it. 00:36:58.545 [2024-07-13 05:26:04.994462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.545 [2024-07-13 05:26:04.994498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.545 qpair failed and we were unable to recover it. 00:36:58.545 [2024-07-13 05:26:04.994682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.545 [2024-07-13 05:26:04.994714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.545 qpair failed and we were unable to recover it. 00:36:58.545 [2024-07-13 05:26:04.994891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.545 [2024-07-13 05:26:04.994925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.545 qpair failed and we were unable to recover it. 00:36:58.545 [2024-07-13 05:26:04.995063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.545 [2024-07-13 05:26:04.995096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.545 qpair failed and we were unable to recover it. 00:36:58.545 [2024-07-13 05:26:04.995275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.545 [2024-07-13 05:26:04.995311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.545 qpair failed and we were unable to recover it. 
00:36:58.545 [2024-07-13 05:26:04.995468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.545 [2024-07-13 05:26:04.995501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.545 qpair failed and we were unable to recover it. 00:36:58.545 [2024-07-13 05:26:04.995671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.545 [2024-07-13 05:26:04.995707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.545 qpair failed and we were unable to recover it. 00:36:58.545 [2024-07-13 05:26:04.995885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.545 [2024-07-13 05:26:04.995922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.545 qpair failed and we were unable to recover it. 00:36:58.545 [2024-07-13 05:26:04.996106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.545 [2024-07-13 05:26:04.996142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.545 qpair failed and we were unable to recover it. 00:36:58.545 [2024-07-13 05:26:04.996359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.545 [2024-07-13 05:26:04.996391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.545 qpair failed and we were unable to recover it. 00:36:58.545 [2024-07-13 05:26:04.996573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.545 [2024-07-13 05:26:04.996611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.545 qpair failed and we were unable to recover it. 00:36:58.545 [2024-07-13 05:26:04.996796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.545 [2024-07-13 05:26:04.996871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.545 qpair failed and we were unable to recover it. 00:36:58.545 [2024-07-13 05:26:04.997050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.545 [2024-07-13 05:26:04.997084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.545 qpair failed and we were unable to recover it. 00:36:58.545 [2024-07-13 05:26:04.997254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.545 [2024-07-13 05:26:04.997286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.545 qpair failed and we were unable to recover it. 00:36:58.545 [2024-07-13 05:26:04.997468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.545 [2024-07-13 05:26:04.997504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.545 qpair failed and we were unable to recover it. 
00:36:58.829 [2024-07-13 05:26:05.037988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.829 [2024-07-13 05:26:05.038022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.829 qpair failed and we were unable to recover it. 00:36:58.829 [2024-07-13 05:26:05.038208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.829 [2024-07-13 05:26:05.038240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.829 qpair failed and we were unable to recover it. 00:36:58.829 [2024-07-13 05:26:05.038440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.829 [2024-07-13 05:26:05.038476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.829 qpair failed and we were unable to recover it. 00:36:58.829 [2024-07-13 05:26:05.038644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.829 [2024-07-13 05:26:05.038679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.829 qpair failed and we were unable to recover it. 00:36:58.829 [2024-07-13 05:26:05.038872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.829 [2024-07-13 05:26:05.038906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.829 qpair failed and we were unable to recover it. 00:36:58.829 [2024-07-13 05:26:05.039076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.829 [2024-07-13 05:26:05.039108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.829 qpair failed and we were unable to recover it. 00:36:58.829 [2024-07-13 05:26:05.039288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.829 [2024-07-13 05:26:05.039325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.829 qpair failed and we were unable to recover it. 00:36:58.829 [2024-07-13 05:26:05.039488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.829 [2024-07-13 05:26:05.039526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.829 qpair failed and we were unable to recover it. 00:36:58.829 [2024-07-13 05:26:05.039687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.829 [2024-07-13 05:26:05.039718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.829 qpair failed and we were unable to recover it. 00:36:58.829 [2024-07-13 05:26:05.039902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.829 [2024-07-13 05:26:05.039935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.829 qpair failed and we were unable to recover it. 
00:36:58.829 [2024-07-13 05:26:05.040133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.829 [2024-07-13 05:26:05.040170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.829 qpair failed and we were unable to recover it. 00:36:58.829 [2024-07-13 05:26:05.040351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.829 [2024-07-13 05:26:05.040387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.829 qpair failed and we were unable to recover it. 00:36:58.829 [2024-07-13 05:26:05.040568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.829 [2024-07-13 05:26:05.040601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.829 qpair failed and we were unable to recover it. 00:36:58.829 [2024-07-13 05:26:05.040738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.829 [2024-07-13 05:26:05.040769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.829 qpair failed and we were unable to recover it. 00:36:58.829 [2024-07-13 05:26:05.040970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.829 [2024-07-13 05:26:05.041003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.829 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.041167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.041200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.041359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.041392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.041609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.041646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.041829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.041871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.042026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.042061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 
00:36:58.830 [2024-07-13 05:26:05.042260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.042292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.042477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.042509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.042677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.042710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.042848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.042893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.043028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.043061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.043331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.043364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.043596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.043633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.043782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.043818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.044005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.044047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.044207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.044239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 
00:36:58.830 [2024-07-13 05:26:05.044492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.044543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.044718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.044755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.044937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.044970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.045111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.045144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.045297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.045330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.045492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.045524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.045745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.045781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.045974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.046007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.046311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.046380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.046581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.046630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 
00:36:58.830 [2024-07-13 05:26:05.046825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.046858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.047028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.047068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.047282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.047320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.047475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.047512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.047691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.047726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.047879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.047914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.048224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.048286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.048463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.048499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.048688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.048721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.048895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.048929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 
00:36:58.830 [2024-07-13 05:26:05.049173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.049206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.049393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.049426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.049567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.830 [2024-07-13 05:26:05.049600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.830 qpair failed and we were unable to recover it. 00:36:58.830 [2024-07-13 05:26:05.049783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.049821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.050019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.050057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.050241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.050278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.050489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.050521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.050677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.050710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.050896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.050934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.051199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.051236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 
00:36:58.831 [2024-07-13 05:26:05.051423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.051456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.051593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.051627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.051772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.051805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.051969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.052003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.052142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.052175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.052330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.052369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.052601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.052634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.052792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.052824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.052999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.053032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.053192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.053225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 
00:36:58.831 [2024-07-13 05:26:05.053416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.053449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.053634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.053671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.053828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.053860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.054001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.054042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.054245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.054278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.054443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.054477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.054606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.054639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.054801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.054834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.055004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.055038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.055172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.055205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 
00:36:58.831 [2024-07-13 05:26:05.055358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.055391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.055556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.055590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.055771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.055808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.056004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.056037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.056172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.056206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.056343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.056375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.056575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.056613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.056775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.056808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.056950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.056983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.057154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.057186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 
00:36:58.831 [2024-07-13 05:26:05.057328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.057361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.057530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.057562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.057729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.057761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.831 qpair failed and we were unable to recover it. 00:36:58.831 [2024-07-13 05:26:05.057900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.831 [2024-07-13 05:26:05.057933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.058074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.058106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.058264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.058298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.058480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.058513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.058705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.058736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.058887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.058921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.059049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.059081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 
00:36:58.832 [2024-07-13 05:26:05.059256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.059289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.059454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.059486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.059638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.059681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.059817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.059862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.060009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.060043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.060206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.060238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.060376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.060408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.060543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.060577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.060734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.060767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.060909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.060942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 
00:36:58.832 [2024-07-13 05:26:05.061094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.061127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.061287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.061320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.061505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.061537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.061711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.061743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.061889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.061922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.062052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.062085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.062247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.062280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.062453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.062486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.062631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.062664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.062828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.062860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 
00:36:58.832 [2024-07-13 05:26:05.063035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.063067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.063223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.063255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.063401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.063434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.063657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.063694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.063914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.063947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.064109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.064142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.064287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.064325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.064490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.064523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.064662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.064693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.064886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.064920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 
00:36:58.832 [2024-07-13 05:26:05.065117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.065150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.065313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.065346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.065530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.065562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.832 [2024-07-13 05:26:05.065697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.832 [2024-07-13 05:26:05.065729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.832 qpair failed and we were unable to recover it. 00:36:58.833 [2024-07-13 05:26:05.065900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.833 [2024-07-13 05:26:05.065933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.833 qpair failed and we were unable to recover it. 00:36:58.833 [2024-07-13 05:26:05.066120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.833 [2024-07-13 05:26:05.066153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.833 qpair failed and we were unable to recover it. 00:36:58.833 [2024-07-13 05:26:05.066289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.833 [2024-07-13 05:26:05.066321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.833 qpair failed and we were unable to recover it. 00:36:58.833 [2024-07-13 05:26:05.066462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.833 [2024-07-13 05:26:05.066494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.833 qpair failed and we were unable to recover it. 00:36:58.833 [2024-07-13 05:26:05.066666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.833 [2024-07-13 05:26:05.066699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.833 qpair failed and we were unable to recover it. 00:36:58.833 [2024-07-13 05:26:05.066872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.833 [2024-07-13 05:26:05.066904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.833 qpair failed and we were unable to recover it. 
00:36:58.833 [2024-07-13 05:26:05.067045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:58.833 [2024-07-13 05:26:05.067077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:58.833 qpair failed and we were unable to recover it.
[the same three-line error sequence repeats for every connection attempt made between 05:26:05.067 and 05:26:05.109, always against tqpair=0x6150001ffe80, addr=10.0.0.2, port=4420; only the microsecond timestamps advance]
00:36:58.838 [2024-07-13 05:26:05.109781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:58.838 [2024-07-13 05:26:05.109816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:58.838 qpair failed and we were unable to recover it.
00:36:58.838 [2024-07-13 05:26:05.109988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.838 [2024-07-13 05:26:05.110025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.838 qpair failed and we were unable to recover it. 00:36:58.838 [2024-07-13 05:26:05.110209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.838 [2024-07-13 05:26:05.110242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.838 qpair failed and we were unable to recover it. 00:36:58.838 [2024-07-13 05:26:05.110427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.838 [2024-07-13 05:26:05.110460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.838 qpair failed and we were unable to recover it. 00:36:58.838 [2024-07-13 05:26:05.110641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.838 [2024-07-13 05:26:05.110682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.838 qpair failed and we were unable to recover it. 00:36:58.838 [2024-07-13 05:26:05.110860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.839 [2024-07-13 05:26:05.110904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.839 qpair failed and we were unable to recover it. 00:36:58.839 [2024-07-13 05:26:05.111075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.839 [2024-07-13 05:26:05.111111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.839 qpair failed and we were unable to recover it. 00:36:58.839 [2024-07-13 05:26:05.111301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.839 [2024-07-13 05:26:05.111333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.839 qpair failed and we were unable to recover it. 00:36:58.839 [2024-07-13 05:26:05.111497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.839 [2024-07-13 05:26:05.111529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.839 qpair failed and we were unable to recover it. 00:36:58.839 [2024-07-13 05:26:05.111725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.839 [2024-07-13 05:26:05.111762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.839 qpair failed and we were unable to recover it. 00:36:58.839 [2024-07-13 05:26:05.111961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.839 [2024-07-13 05:26:05.112022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.839 qpair failed and we were unable to recover it. 
00:36:58.839 [2024-07-13 05:26:05.112180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.839 [2024-07-13 05:26:05.112212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.839 qpair failed and we were unable to recover it. 00:36:58.839 [2024-07-13 05:26:05.112392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.839 [2024-07-13 05:26:05.112429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.839 qpair failed and we were unable to recover it. 00:36:58.839 [2024-07-13 05:26:05.112602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.839 [2024-07-13 05:26:05.112638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.839 qpair failed and we were unable to recover it. 00:36:58.839 [2024-07-13 05:26:05.112859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.839 [2024-07-13 05:26:05.112916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.839 qpair failed and we were unable to recover it. 00:36:58.839 [2024-07-13 05:26:05.113074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.839 [2024-07-13 05:26:05.113108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.839 qpair failed and we were unable to recover it. 00:36:58.839 [2024-07-13 05:26:05.113268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.839 [2024-07-13 05:26:05.113321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.839 qpair failed and we were unable to recover it. 00:36:58.839 [2024-07-13 05:26:05.113473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.839 [2024-07-13 05:26:05.113509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.839 qpair failed and we were unable to recover it. 00:36:58.839 [2024-07-13 05:26:05.113711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.839 [2024-07-13 05:26:05.113747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.839 qpair failed and we were unable to recover it. 00:36:58.839 [2024-07-13 05:26:05.113954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.839 [2024-07-13 05:26:05.113986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.839 qpair failed and we were unable to recover it. 00:36:58.840 [2024-07-13 05:26:05.114146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.840 [2024-07-13 05:26:05.114178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.840 qpair failed and we were unable to recover it. 
00:36:58.840 [2024-07-13 05:26:05.114316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.840 [2024-07-13 05:26:05.114365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.840 qpair failed and we were unable to recover it. 00:36:58.840 [2024-07-13 05:26:05.114518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.840 [2024-07-13 05:26:05.114554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.840 qpair failed and we were unable to recover it. 00:36:58.840 [2024-07-13 05:26:05.114758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.840 [2024-07-13 05:26:05.114789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.840 qpair failed and we were unable to recover it. 00:36:58.840 [2024-07-13 05:26:05.114972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.840 [2024-07-13 05:26:05.115008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.840 qpair failed and we were unable to recover it. 00:36:58.840 [2024-07-13 05:26:05.115222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.840 [2024-07-13 05:26:05.115280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.840 qpair failed and we were unable to recover it. 00:36:58.840 [2024-07-13 05:26:05.115456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.840 [2024-07-13 05:26:05.115492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.840 qpair failed and we were unable to recover it. 00:36:58.840 [2024-07-13 05:26:05.115674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.840 [2024-07-13 05:26:05.115707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.840 qpair failed and we were unable to recover it. 00:36:58.840 [2024-07-13 05:26:05.115912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.841 [2024-07-13 05:26:05.115949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.841 qpair failed and we were unable to recover it. 00:36:58.841 [2024-07-13 05:26:05.116096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.841 [2024-07-13 05:26:05.116132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.841 qpair failed and we were unable to recover it. 00:36:58.841 [2024-07-13 05:26:05.116282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.841 [2024-07-13 05:26:05.116318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.841 qpair failed and we were unable to recover it. 
00:36:58.841 [2024-07-13 05:26:05.116506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.841 [2024-07-13 05:26:05.116538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.841 qpair failed and we were unable to recover it. 00:36:58.841 [2024-07-13 05:26:05.116731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.841 [2024-07-13 05:26:05.116767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.841 qpair failed and we were unable to recover it. 00:36:58.841 [2024-07-13 05:26:05.116944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.841 [2024-07-13 05:26:05.116982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.841 qpair failed and we were unable to recover it. 00:36:58.841 [2024-07-13 05:26:05.117159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.841 [2024-07-13 05:26:05.117196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.841 qpair failed and we were unable to recover it. 00:36:58.841 [2024-07-13 05:26:05.117407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.841 [2024-07-13 05:26:05.117440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.841 qpair failed and we were unable to recover it. 00:36:58.841 [2024-07-13 05:26:05.117624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.842 [2024-07-13 05:26:05.117661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.842 qpair failed and we were unable to recover it. 00:36:58.842 [2024-07-13 05:26:05.117838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.842 [2024-07-13 05:26:05.117879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.842 qpair failed and we were unable to recover it. 00:36:58.842 [2024-07-13 05:26:05.118056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.842 [2024-07-13 05:26:05.118091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.842 qpair failed and we were unable to recover it. 00:36:58.842 [2024-07-13 05:26:05.118306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.842 [2024-07-13 05:26:05.118338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.842 qpair failed and we were unable to recover it. 00:36:58.842 [2024-07-13 05:26:05.118478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.842 [2024-07-13 05:26:05.118512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.842 qpair failed and we were unable to recover it. 
00:36:58.842 [2024-07-13 05:26:05.118673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.842 [2024-07-13 05:26:05.118722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.842 qpair failed and we were unable to recover it. 00:36:58.842 [2024-07-13 05:26:05.118893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.842 [2024-07-13 05:26:05.118929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.842 qpair failed and we were unable to recover it. 00:36:58.842 [2024-07-13 05:26:05.119114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.842 [2024-07-13 05:26:05.119146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.842 qpair failed and we were unable to recover it. 00:36:58.842 [2024-07-13 05:26:05.119325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.842 [2024-07-13 05:26:05.119366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.842 qpair failed and we were unable to recover it. 00:36:58.843 [2024-07-13 05:26:05.119568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.843 [2024-07-13 05:26:05.119604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.843 qpair failed and we were unable to recover it. 00:36:58.843 [2024-07-13 05:26:05.119781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.843 [2024-07-13 05:26:05.119817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.843 qpair failed and we were unable to recover it. 00:36:58.843 [2024-07-13 05:26:05.119991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.843 [2024-07-13 05:26:05.120024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.843 qpair failed and we were unable to recover it. 00:36:58.843 [2024-07-13 05:26:05.120159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.843 [2024-07-13 05:26:05.120191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.843 qpair failed and we were unable to recover it. 00:36:58.843 [2024-07-13 05:26:05.120317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.843 [2024-07-13 05:26:05.120349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.843 qpair failed and we were unable to recover it. 00:36:58.843 [2024-07-13 05:26:05.120533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.843 [2024-07-13 05:26:05.120565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.843 qpair failed and we were unable to recover it. 
00:36:58.843 [2024-07-13 05:26:05.120766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.843 [2024-07-13 05:26:05.120799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.843 qpair failed and we were unable to recover it. 00:36:58.843 [2024-07-13 05:26:05.120990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.843 [2024-07-13 05:26:05.121027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.843 qpair failed and we were unable to recover it. 00:36:58.843 [2024-07-13 05:26:05.121301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.843 [2024-07-13 05:26:05.121362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.843 qpair failed and we were unable to recover it. 00:36:58.843 [2024-07-13 05:26:05.121520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.843 [2024-07-13 05:26:05.121553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.843 qpair failed and we were unable to recover it. 00:36:58.843 [2024-07-13 05:26:05.121714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.843 [2024-07-13 05:26:05.121749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.843 qpair failed and we were unable to recover it. 00:36:58.843 [2024-07-13 05:26:05.121965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.843 [2024-07-13 05:26:05.121997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.843 qpair failed and we were unable to recover it. 00:36:58.843 [2024-07-13 05:26:05.122132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.843 [2024-07-13 05:26:05.122182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.843 qpair failed and we were unable to recover it. 00:36:58.843 [2024-07-13 05:26:05.122359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.843 [2024-07-13 05:26:05.122395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.843 qpair failed and we were unable to recover it. 00:36:58.843 [2024-07-13 05:26:05.122552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.843 [2024-07-13 05:26:05.122586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.843 qpair failed and we were unable to recover it. 00:36:58.843 [2024-07-13 05:26:05.122747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.843 [2024-07-13 05:26:05.122796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.843 qpair failed and we were unable to recover it. 
00:36:58.843 [2024-07-13 05:26:05.122949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.843 [2024-07-13 05:26:05.122987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.843 qpair failed and we were unable to recover it. 00:36:58.843 [2024-07-13 05:26:05.123191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.843 [2024-07-13 05:26:05.123227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.843 qpair failed and we were unable to recover it. 00:36:58.844 [2024-07-13 05:26:05.123405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.844 [2024-07-13 05:26:05.123437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.844 qpair failed and we were unable to recover it. 00:36:58.844 [2024-07-13 05:26:05.123580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.844 [2024-07-13 05:26:05.123616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.844 qpair failed and we were unable to recover it. 00:36:58.844 [2024-07-13 05:26:05.123786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.844 [2024-07-13 05:26:05.123821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.844 qpair failed and we were unable to recover it. 00:36:58.844 [2024-07-13 05:26:05.124015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.844 [2024-07-13 05:26:05.124050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.844 qpair failed and we were unable to recover it. 00:36:58.844 [2024-07-13 05:26:05.124208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.844 [2024-07-13 05:26:05.124240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.844 qpair failed and we were unable to recover it. 00:36:58.844 [2024-07-13 05:26:05.124421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.845 [2024-07-13 05:26:05.124457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.845 qpair failed and we were unable to recover it. 00:36:58.845 [2024-07-13 05:26:05.124659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.845 [2024-07-13 05:26:05.124694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.845 qpair failed and we were unable to recover it. 00:36:58.845 [2024-07-13 05:26:05.124846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.845 [2024-07-13 05:26:05.124888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.845 qpair failed and we were unable to recover it. 
00:36:58.845 [2024-07-13 05:26:05.125082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.845 [2024-07-13 05:26:05.125115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.845 qpair failed and we were unable to recover it. 00:36:58.845 [2024-07-13 05:26:05.125293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.845 [2024-07-13 05:26:05.125328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.845 qpair failed and we were unable to recover it. 00:36:58.845 [2024-07-13 05:26:05.125577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.845 [2024-07-13 05:26:05.125634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.845 qpair failed and we were unable to recover it. 00:36:58.845 [2024-07-13 05:26:05.125806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.845 [2024-07-13 05:26:05.125914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.845 qpair failed and we were unable to recover it. 00:36:58.845 [2024-07-13 05:26:05.126099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.845 [2024-07-13 05:26:05.126132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.845 qpair failed and we were unable to recover it. 00:36:58.846 [2024-07-13 05:26:05.126334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.846 [2024-07-13 05:26:05.126371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.846 qpair failed and we were unable to recover it. 00:36:58.846 [2024-07-13 05:26:05.126568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.846 [2024-07-13 05:26:05.126625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.846 qpair failed and we were unable to recover it. 00:36:58.846 [2024-07-13 05:26:05.126807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.846 [2024-07-13 05:26:05.126838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.846 qpair failed and we were unable to recover it. 00:36:58.846 [2024-07-13 05:26:05.127028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.846 [2024-07-13 05:26:05.127060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.846 qpair failed and we were unable to recover it. 00:36:58.846 [2024-07-13 05:26:05.127205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.846 [2024-07-13 05:26:05.127241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.846 qpair failed and we were unable to recover it. 
00:36:58.847 [2024-07-13 05:26:05.127403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.847 [2024-07-13 05:26:05.127435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.847 qpair failed and we were unable to recover it. 00:36:58.847 [2024-07-13 05:26:05.127568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.847 [2024-07-13 05:26:05.127599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.847 qpair failed and we were unable to recover it. 00:36:58.847 [2024-07-13 05:26:05.127792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.847 [2024-07-13 05:26:05.127824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.847 qpair failed and we were unable to recover it. 00:36:58.847 [2024-07-13 05:26:05.128007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.847 [2024-07-13 05:26:05.128048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.847 qpair failed and we were unable to recover it. 00:36:58.847 [2024-07-13 05:26:05.128243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.847 [2024-07-13 05:26:05.128302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.847 qpair failed and we were unable to recover it. 00:36:58.847 [2024-07-13 05:26:05.128503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.847 [2024-07-13 05:26:05.128539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.847 qpair failed and we were unable to recover it. 00:36:58.847 [2024-07-13 05:26:05.128696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.847 [2024-07-13 05:26:05.128729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.847 qpair failed and we were unable to recover it. 00:36:58.847 [2024-07-13 05:26:05.128863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.848 [2024-07-13 05:26:05.128923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.848 qpair failed and we were unable to recover it. 00:36:58.848 [2024-07-13 05:26:05.129110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.848 [2024-07-13 05:26:05.129146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.848 qpair failed and we were unable to recover it. 00:36:58.848 [2024-07-13 05:26:05.129355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.848 [2024-07-13 05:26:05.129388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.848 qpair failed and we were unable to recover it. 
00:36:58.848 [2024-07-13 05:26:05.129522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.848 [2024-07-13 05:26:05.129554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.848 qpair failed and we were unable to recover it. 00:36:58.848 [2024-07-13 05:26:05.129759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.848 [2024-07-13 05:26:05.129794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.848 qpair failed and we were unable to recover it. 00:36:58.848 [2024-07-13 05:26:05.129973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.848 [2024-07-13 05:26:05.130010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.848 qpair failed and we were unable to recover it. 00:36:58.848 [2024-07-13 05:26:05.130221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.849 [2024-07-13 05:26:05.130258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.849 qpair failed and we were unable to recover it. 00:36:58.849 [2024-07-13 05:26:05.130444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.849 [2024-07-13 05:26:05.130477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.849 qpair failed and we were unable to recover it. 00:36:58.849 [2024-07-13 05:26:05.130655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.849 [2024-07-13 05:26:05.130691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.849 qpair failed and we were unable to recover it. 00:36:58.849 [2024-07-13 05:26:05.130873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.849 [2024-07-13 05:26:05.130908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.849 qpair failed and we were unable to recover it. 00:36:58.849 [2024-07-13 05:26:05.131090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.849 [2024-07-13 05:26:05.131126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.849 qpair failed and we were unable to recover it. 00:36:58.850 [2024-07-13 05:26:05.131316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.850 [2024-07-13 05:26:05.131348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.850 qpair failed and we were unable to recover it. 00:36:58.850 [2024-07-13 05:26:05.131521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.850 [2024-07-13 05:26:05.131557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.850 qpair failed and we were unable to recover it. 
00:36:58.850 [2024-07-13 05:26:05.131739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.850 [2024-07-13 05:26:05.131771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.850 qpair failed and we were unable to recover it. 00:36:58.850 [2024-07-13 05:26:05.131930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.850 [2024-07-13 05:26:05.131980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.850 qpair failed and we were unable to recover it. 00:36:58.850 [2024-07-13 05:26:05.132187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.850 [2024-07-13 05:26:05.132219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.851 qpair failed and we were unable to recover it. 00:36:58.851 [2024-07-13 05:26:05.132422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.851 [2024-07-13 05:26:05.132457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.851 qpair failed and we were unable to recover it. 00:36:58.851 [2024-07-13 05:26:05.132649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.851 [2024-07-13 05:26:05.132696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.851 qpair failed and we were unable to recover it. 00:36:58.851 [2024-07-13 05:26:05.132869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.851 [2024-07-13 05:26:05.132918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.851 qpair failed and we were unable to recover it. 00:36:58.851 [2024-07-13 05:26:05.133078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.851 [2024-07-13 05:26:05.133110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.851 qpair failed and we were unable to recover it. 00:36:58.851 [2024-07-13 05:26:05.133267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.851 [2024-07-13 05:26:05.133304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.851 qpair failed and we were unable to recover it. 00:36:58.851 [2024-07-13 05:26:05.133464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.851 [2024-07-13 05:26:05.133500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.851 qpair failed and we were unable to recover it. 00:36:58.851 [2024-07-13 05:26:05.133674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.851 [2024-07-13 05:26:05.133709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.851 qpair failed and we were unable to recover it. 
00:36:58.851 [2024-07-13 05:26:05.133903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.851 [2024-07-13 05:26:05.133936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.851 qpair failed and we were unable to recover it. 00:36:58.852 [2024-07-13 05:26:05.134090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.852 [2024-07-13 05:26:05.134122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.852 qpair failed and we were unable to recover it. 00:36:58.852 [2024-07-13 05:26:05.134351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.852 [2024-07-13 05:26:05.134410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.852 qpair failed and we were unable to recover it. 00:36:58.852 [2024-07-13 05:26:05.134610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.852 [2024-07-13 05:26:05.134647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.852 qpair failed and we were unable to recover it. 00:36:58.852 [2024-07-13 05:26:05.134803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.852 [2024-07-13 05:26:05.134836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.852 qpair failed and we were unable to recover it. 00:36:58.852 [2024-07-13 05:26:05.135021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.852 [2024-07-13 05:26:05.135058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.852 qpair failed and we were unable to recover it. 00:36:58.852 [2024-07-13 05:26:05.135278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.852 [2024-07-13 05:26:05.135331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.852 qpair failed and we were unable to recover it. 00:36:58.852 [2024-07-13 05:26:05.135505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.852 [2024-07-13 05:26:05.135541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.852 qpair failed and we were unable to recover it. 00:36:58.852 [2024-07-13 05:26:05.135714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.852 [2024-07-13 05:26:05.135746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.852 qpair failed and we were unable to recover it. 00:36:58.852 [2024-07-13 05:26:05.135923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.852 [2024-07-13 05:26:05.135959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.852 qpair failed and we were unable to recover it. 
00:36:58.852 [2024-07-13 05:26:05.136130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.852 [2024-07-13 05:26:05.136166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.852 qpair failed and we were unable to recover it. 00:36:58.852 [2024-07-13 05:26:05.136319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.852 [2024-07-13 05:26:05.136362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.852 qpair failed and we were unable to recover it. 00:36:58.852 [2024-07-13 05:26:05.136569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.852 [2024-07-13 05:26:05.136601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.852 qpair failed and we were unable to recover it. 00:36:58.852 [2024-07-13 05:26:05.136794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.852 [2024-07-13 05:26:05.136830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.852 qpair failed and we were unable to recover it. 00:36:58.852 [2024-07-13 05:26:05.136993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.852 [2024-07-13 05:26:05.137041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.852 qpair failed and we were unable to recover it. 00:36:58.852 [2024-07-13 05:26:05.137184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.852 [2024-07-13 05:26:05.137220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.852 qpair failed and we were unable to recover it. 00:36:58.852 [2024-07-13 05:26:05.137369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.852 [2024-07-13 05:26:05.137401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.852 qpair failed and we were unable to recover it. 00:36:58.852 [2024-07-13 05:26:05.137563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.852 [2024-07-13 05:26:05.137595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.852 qpair failed and we were unable to recover it. 00:36:58.852 [2024-07-13 05:26:05.137746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.852 [2024-07-13 05:26:05.137796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.852 qpair failed and we were unable to recover it. 00:36:58.852 [2024-07-13 05:26:05.137975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.852 [2024-07-13 05:26:05.138013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.852 qpair failed and we were unable to recover it. 
00:36:58.852 [2024-07-13 05:26:05.138214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:58.852 [2024-07-13 05:26:05.138247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:58.852 qpair failed and we were unable to recover it.
[... the same three-line failure pattern -- posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error / "qpair failed and we were unable to recover it." -- repeats for every reconnect attempt against tqpair=0x6150001ffe80 (addr=10.0.0.2, port=4420), timestamps 05:26:05.138426 through 05:26:05.182826; identical repeats elided ...]
00:36:58.868 [2024-07-13 05:26:05.182981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:58.868 [2024-07-13 05:26:05.183017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:58.868 qpair failed and we were unable to recover it.
00:36:58.868 [2024-07-13 05:26:05.183202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.183235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.183387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.183419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.183578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.183610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.183758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.183794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.183983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.184017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.184203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.184239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.184414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.184447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.184634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.184670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.184845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.184887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.185034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.185070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 
00:36:58.868 [2024-07-13 05:26:05.185255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.185292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.185485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.185534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.185679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.185715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.185874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.185911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.186091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.186123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.186328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.186365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.186536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.186572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.186758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.186791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.186932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.186965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.187135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.187171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 
00:36:58.868 [2024-07-13 05:26:05.187422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.187455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.187638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.187671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.187863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.187902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.188085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.188122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.188323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.188388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.188563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.188600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.188743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.188776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.188930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.188980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.189224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.189283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.189452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.189488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 
00:36:58.868 [2024-07-13 05:26:05.189648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.189681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.189870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.189907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.190097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.190129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.190285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.190318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.190477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.190509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.190645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.190678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.190888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.190930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.191111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.191148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.191359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.191391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.191570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.191606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 
00:36:58.868 [2024-07-13 05:26:05.191783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.191820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.192016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.868 [2024-07-13 05:26:05.192049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.868 qpair failed and we were unable to recover it. 00:36:58.868 [2024-07-13 05:26:05.192208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.192241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.192418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.192453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.192665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.192698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.192829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.192889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.193099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.193131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.193267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.193301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.193430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.193464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.193658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.193694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 
00:36:58.869 [2024-07-13 05:26:05.193882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.193939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.194077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.194109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.194279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.194312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.194467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.194509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.194640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.194672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.194831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.194864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.195081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.195117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.195292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.195329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.195481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.195513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.195641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.195690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 
00:36:58.869 [2024-07-13 05:26:05.195874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.195911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.196085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.196122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.196301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.196333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.196514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.196550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.196729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.196766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.196925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.196963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.197147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.197180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.197333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.197369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.197538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.197575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.197774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.197810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 
00:36:58.869 [2024-07-13 05:26:05.197977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.198010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.198210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.198246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.198499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.198565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.198732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.198765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.198952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.198985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.199167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.199204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.199452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.199512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.199727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.199761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.199894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.199927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.200086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.200134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 
00:36:58.869 [2024-07-13 05:26:05.200419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.200478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.200656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.200693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.200875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.200909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.201114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.201149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.201350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.201414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.201572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.201607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.201791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.201823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.202030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.202067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.202305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.202342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.202492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.202529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 
00:36:58.869 [2024-07-13 05:26:05.202709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.202746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.202952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.202989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.203174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.203209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.203363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.203398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.203592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.203625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.203784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.203820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.204011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.204045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.869 qpair failed and we were unable to recover it. 00:36:58.869 [2024-07-13 05:26:05.204205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.869 [2024-07-13 05:26:05.204257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.204410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.204443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.204618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.204654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 
00:36:58.870 [2024-07-13 05:26:05.204859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.204918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.205055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.205087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.205239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.205270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.205438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.205475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.205626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.205663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.205809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.205844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.206026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.206058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.206192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.206243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.206415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.206451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.206619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.206655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 
00:36:58.870 [2024-07-13 05:26:05.206838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.206883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.207093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.207129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.207263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.207299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.207479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.207515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.207696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.207728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.207935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.207972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.208129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.208165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.208383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.208426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.208607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.208639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.208822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.208858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 
00:36:58.870 [2024-07-13 05:26:05.209022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.209058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.209201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.209236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.209444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.209476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.209635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.209667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.209825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.209883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.210047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.210080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.210254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.210287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.210465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.210501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.210701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.210737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 00:36:58.870 [2024-07-13 05:26:05.210909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.870 [2024-07-13 05:26:05.210947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.870 qpair failed and we were unable to recover it. 
00:36:58.870 [2024-07-13 05:26:05.211129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:58.870 [2024-07-13 05:26:05.211165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:58.870 qpair failed and we were unable to recover it.
00:36:58.875 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats verbatim for every subsequent reconnect attempt, from 05:26:05.211 through 05:26:05.255 ...]
00:36:58.875 [2024-07-13 05:26:05.255907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.255942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.256126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.256162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.256341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.256374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.256561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.256597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.256780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.256815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.256981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.257013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.257142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.257174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.257377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.257413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.257586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.257622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.257774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.257810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 
00:36:58.875 [2024-07-13 05:26:05.258003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.258036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.258256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.258292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.258587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.258646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.258822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.258858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.259049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.259080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.259226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.259261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.259475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.259507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.259652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.259685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.259836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.259876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.260027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.260062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 
00:36:58.875 [2024-07-13 05:26:05.260215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.260251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.260423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.260460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.260614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.260646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.260853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.260897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.261082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.261118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.261286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.261323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.261501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.261533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.261707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.261743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.261883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.261934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.262097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.262129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 
00:36:58.875 [2024-07-13 05:26:05.262330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.262367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.262507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.262540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.262704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.262756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.262931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.262977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.263158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.263191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.263326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.263359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.263490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.263523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.263678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.875 [2024-07-13 05:26:05.263710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.875 qpair failed and we were unable to recover it. 00:36:58.875 [2024-07-13 05:26:05.263895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.876 [2024-07-13 05:26:05.263928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.876 qpair failed and we were unable to recover it. 00:36:58.876 [2024-07-13 05:26:05.264103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.876 [2024-07-13 05:26:05.264139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.876 qpair failed and we were unable to recover it. 
00:36:58.876 [2024-07-13 05:26:05.264320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.876 [2024-07-13 05:26:05.264357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.876 qpair failed and we were unable to recover it. 00:36:58.876 [2024-07-13 05:26:05.264532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.876 [2024-07-13 05:26:05.264586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.876 qpair failed and we were unable to recover it. 00:36:58.876 [2024-07-13 05:26:05.264752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.876 [2024-07-13 05:26:05.264788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.876 qpair failed and we were unable to recover it. 00:36:58.876 [2024-07-13 05:26:05.264964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.876 [2024-07-13 05:26:05.265003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.876 qpair failed and we were unable to recover it. 00:36:58.876 [2024-07-13 05:26:05.265152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.876 [2024-07-13 05:26:05.265187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.876 qpair failed and we were unable to recover it. 00:36:58.876 [2024-07-13 05:26:05.265371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.876 [2024-07-13 05:26:05.265406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.876 qpair failed and we were unable to recover it. 00:36:58.876 [2024-07-13 05:26:05.265566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.876 [2024-07-13 05:26:05.265599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.876 qpair failed and we were unable to recover it. 00:36:58.876 [2024-07-13 05:26:05.265789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.876 [2024-07-13 05:26:05.265825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.876 qpair failed and we were unable to recover it. 00:36:58.876 [2024-07-13 05:26:05.265987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.876 [2024-07-13 05:26:05.266020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.876 qpair failed and we were unable to recover it. 00:36:58.876 [2024-07-13 05:26:05.266197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.876 [2024-07-13 05:26:05.266232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:58.876 qpair failed and we were unable to recover it. 
00:36:58.876 [2024-07-13 05:26:05.267047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:58.876 [2024-07-13 05:26:05.267080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:58.876 qpair failed and we were unable to recover it.
[... ~115 more identical failures against tqpair=0x6150001ffe80 between 05:26:05.267 and 05:26:05.291 ...]
00:36:58.879 [2024-07-13 05:26:05.291582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:58.879 [2024-07-13 05:26:05.291634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:58.879 qpair failed and we were unable to recover it.
00:36:58.879 [2024-07-13 05:26:05.291839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.291885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.292066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.292099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.292298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.292335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.292522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.292565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.292759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.292792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.292958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.292992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.293118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.293151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.293317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.293354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.293546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.293579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.293709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.293760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 
00:36:58.879 [2024-07-13 05:26:05.293941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.293990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.294150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.294183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.294342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.294375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.294537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.294575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.294713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.294745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.294927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.294961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.295091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.295125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.295306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.295339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.295500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.295533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.295705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.295738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 
00:36:58.879 [2024-07-13 05:26:05.295931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.295964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.296126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.296159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.296350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.296382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.296549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.296582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.296717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.296750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.296930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.296964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.297128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.297161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.297319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.297352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.297485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.297518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.297674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.297707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 
00:36:58.879 [2024-07-13 05:26:05.297913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.297947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.298139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.298176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.298362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.298395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.298560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.298593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.298755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.298794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.298956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.298992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.299160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.299193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.299335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.879 [2024-07-13 05:26:05.299370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.879 qpair failed and we were unable to recover it. 00:36:58.879 [2024-07-13 05:26:05.299538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.880 [2024-07-13 05:26:05.299572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.880 qpair failed and we were unable to recover it. 00:36:58.880 [2024-07-13 05:26:05.299739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.880 [2024-07-13 05:26:05.299771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.880 qpair failed and we were unable to recover it. 
00:36:58.880 [2024-07-13 05:26:05.299908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.880 [2024-07-13 05:26:05.299947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.880 qpair failed and we were unable to recover it. 00:36:58.880 [2024-07-13 05:26:05.300105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.880 [2024-07-13 05:26:05.300140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.880 qpair failed and we were unable to recover it. 00:36:58.880 [2024-07-13 05:26:05.300310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.880 [2024-07-13 05:26:05.300347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.880 qpair failed and we were unable to recover it. 00:36:58.880 [2024-07-13 05:26:05.300522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.880 [2024-07-13 05:26:05.300556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.880 qpair failed and we were unable to recover it. 00:36:58.880 [2024-07-13 05:26:05.300716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.880 [2024-07-13 05:26:05.300762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.880 qpair failed and we were unable to recover it. 00:36:58.880 [2024-07-13 05:26:05.300932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.880 [2024-07-13 05:26:05.300967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.880 qpair failed and we were unable to recover it. 00:36:58.880 [2024-07-13 05:26:05.301132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.880 [2024-07-13 05:26:05.301166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.880 qpair failed and we were unable to recover it. 00:36:58.880 [2024-07-13 05:26:05.301356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.880 [2024-07-13 05:26:05.301394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.880 qpair failed and we were unable to recover it. 00:36:58.880 [2024-07-13 05:26:05.301549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.880 [2024-07-13 05:26:05.301582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.880 qpair failed and we were unable to recover it. 00:36:58.880 [2024-07-13 05:26:05.301767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.880 [2024-07-13 05:26:05.301805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.880 qpair failed and we were unable to recover it. 
00:36:58.880 [2024-07-13 05:26:05.301973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.880 [2024-07-13 05:26:05.302008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.880 qpair failed and we were unable to recover it. 00:36:58.880 [2024-07-13 05:26:05.302140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.880 [2024-07-13 05:26:05.302173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:58.880 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.302339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.302371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.302566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.302601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.302738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.302771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.302899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.302933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.303114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.303147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.303307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.303340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.303508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.303541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.303687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.303720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 
00:36:59.161 [2024-07-13 05:26:05.303887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.303921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.304140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.304177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.304333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.304366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.304521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.304553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.304717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.304750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.304923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.304958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.305128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.305161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.305319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.305362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.305552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.305586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.305724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.305756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 
00:36:59.161 [2024-07-13 05:26:05.305921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.305956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.306083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.306116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.306302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.306335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.306492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.306526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.306717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.306750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.306905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.306939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.307122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.307155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.307287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.307320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.307459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.307491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.307637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.307673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 
00:36:59.161 [2024-07-13 05:26:05.307956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.307989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.308145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.308177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.308307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.308341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.308471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.308505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.308678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.308710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.308880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.308914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.309046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.309078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.309268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.309323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.309488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.309522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.309680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.309712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 
00:36:59.161 [2024-07-13 05:26:05.309900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.309937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.310109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.310146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.310297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.310329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.310489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.310522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.310660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.310693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.310823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.310856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.311020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.311053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.311182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.311215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.311372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.311406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.311569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.311602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 
00:36:59.161 [2024-07-13 05:26:05.311731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.311764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.311928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.311962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.312152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.312185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.312349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.312385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.312546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.312579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.312740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.312792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.312975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.313008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.313146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.313178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.313332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.313365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.313541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.313574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 
00:36:59.161 [2024-07-13 05:26:05.313743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.313776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.313938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.313971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.161 [2024-07-13 05:26:05.314126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.161 [2024-07-13 05:26:05.314159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.161 qpair failed and we were unable to recover it. 00:36:59.162 [2024-07-13 05:26:05.314314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.162 [2024-07-13 05:26:05.314347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.162 qpair failed and we were unable to recover it. 00:36:59.162 [2024-07-13 05:26:05.314513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.162 [2024-07-13 05:26:05.314550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.162 qpair failed and we were unable to recover it. 00:36:59.162 [2024-07-13 05:26:05.314723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.162 [2024-07-13 05:26:05.314760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.162 qpair failed and we were unable to recover it. 00:36:59.162 [2024-07-13 05:26:05.314919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.162 [2024-07-13 05:26:05.314953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.162 qpair failed and we were unable to recover it. 00:36:59.162 [2024-07-13 05:26:05.315136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.162 [2024-07-13 05:26:05.315169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.162 qpair failed and we were unable to recover it. 00:36:59.162 [2024-07-13 05:26:05.315437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.162 [2024-07-13 05:26:05.315494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.162 qpair failed and we were unable to recover it. 00:36:59.162 [2024-07-13 05:26:05.315665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.162 [2024-07-13 05:26:05.315701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.162 qpair failed and we were unable to recover it. 
00:36:59.162 [2024-07-13 05:26:05.315856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.162 [2024-07-13 05:26:05.315906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.162 qpair failed and we were unable to recover it. 00:36:59.162 [2024-07-13 05:26:05.316104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.162 [2024-07-13 05:26:05.316137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.162 qpair failed and we were unable to recover it. 00:36:59.162 [2024-07-13 05:26:05.316276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.162 [2024-07-13 05:26:05.316309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.162 qpair failed and we were unable to recover it. 00:36:59.162 [2024-07-13 05:26:05.316487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.162 [2024-07-13 05:26:05.316523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.162 qpair failed and we were unable to recover it. 00:36:59.162 [2024-07-13 05:26:05.316698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.162 [2024-07-13 05:26:05.316731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.162 qpair failed and we were unable to recover it. 00:36:59.162 [2024-07-13 05:26:05.316934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.162 [2024-07-13 05:26:05.316968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.162 qpair failed and we were unable to recover it. 00:36:59.162 [2024-07-13 05:26:05.317127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.162 [2024-07-13 05:26:05.317160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.162 qpair failed and we were unable to recover it. 00:36:59.162 [2024-07-13 05:26:05.317283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.162 [2024-07-13 05:26:05.317319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.162 qpair failed and we were unable to recover it. 00:36:59.162 [2024-07-13 05:26:05.317488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.162 [2024-07-13 05:26:05.317520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.162 qpair failed and we were unable to recover it. 00:36:59.162 [2024-07-13 05:26:05.317702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.162 [2024-07-13 05:26:05.317735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.162 qpair failed and we were unable to recover it. 
00:36:59.162 [2024-07-13 05:26:05.317958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.162 [2024-07-13 05:26:05.317997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:59.162 qpair failed and we were unable to recover it.
[... the identical three-line failure sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats approximately 180 more times over the next ~38 ms ...]
00:36:59.165 [2024-07-13 05:26:05.356066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set
00:36:59.165 [2024-07-13 05:26:05.356266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.165 [2024-07-13 05:26:05.356331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.165 qpair failed and we were unable to recover it.
[... the same failure sequence repeats roughly two dozen more times through 05:26:05.361004, mostly against tqpair=0x6150001f2780 with several interleaved attempts on tqpair=0x6150001ffe80; every attempt fails with errno = 111 ...]
00:36:59.165 [2024-07-13 05:26:05.361141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.361174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.361336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.361384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.361553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.361589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.361784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.361816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.361982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.362014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.362200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.362237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.362416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.362448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.362642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.362679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.362840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.362882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.363043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.363074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 
00:36:59.165 [2024-07-13 05:26:05.363210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.363241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.363405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.363437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.363595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.363626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.363811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.363861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.364054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.364087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.364282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.364315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.364495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.364531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.364678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.364714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.364882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.364914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.365093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.365125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 
00:36:59.165 [2024-07-13 05:26:05.365321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.365357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.365627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.365660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.365877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.365928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.366120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.366170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.366329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.366361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.366511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.366560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.366715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.366751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.366920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.366953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.367097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.367129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.367253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.367289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 
00:36:59.165 [2024-07-13 05:26:05.367445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.367477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.367662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-13 05:26:05.367694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.165 qpair failed and we were unable to recover it. 00:36:59.165 [2024-07-13 05:26:05.367862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.367919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.368078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.368111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.368325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.368361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.368542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.368578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.368727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.368759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.368889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.368921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.369126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.369193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.369364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.369399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 
00:36:59.166 [2024-07-13 05:26:05.369605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.369642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.369788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.369841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.370063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.370096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.370289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.370325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.370519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.370577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.370782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.370815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.370989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.371022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.371198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.371252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.371483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.371518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.371695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.371732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 
00:36:59.166 [2024-07-13 05:26:05.371934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.371969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.372105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.372137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.372365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.372401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.372587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.372624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.372812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.372845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.373023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.373057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.373221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.373310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.373514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.373547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.373705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.373742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.373936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.373970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 
00:36:59.166 [2024-07-13 05:26:05.374104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.374154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.374316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.374348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.374481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.374513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.374706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.374738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.374941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.374974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.375161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.375194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.375336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.375369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.375555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.375590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.375777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.375816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.376015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.376048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 
00:36:59.166 [2024-07-13 05:26:05.376211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.376244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.376400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.376431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.376565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.376598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.376768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.376804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.377000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.377048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.377225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.377260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.377446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.377483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.377675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.377707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.377881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.377914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.378073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.378105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 
00:36:59.166 [2024-07-13 05:26:05.378329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.378365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.378571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.378610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.378768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.378818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.378993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.379026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.379210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.379243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.379418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.379450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.379611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.379643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.379808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.379840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.380023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.380056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.380223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.380255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 
00:36:59.166 [2024-07-13 05:26:05.380419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.380463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.380660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.380696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.380939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.380982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.166 [2024-07-13 05:26:05.381162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.166 [2024-07-13 05:26:05.381211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.166 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.381399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.381443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.381617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.381650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.381831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.381863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.382075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.382108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.382305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.382341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.382499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.382531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 
00:36:59.167 [2024-07-13 05:26:05.382742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.382779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.383003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.383036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.383207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.383239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.383483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.383519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.383738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.383772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.383925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.383958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.384110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.384166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.384313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.384348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.384508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.384541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.384746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.384781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 
00:36:59.167 [2024-07-13 05:26:05.384984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.385017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.385257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.385290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.385485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.385522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.385691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.385727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.385941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.385974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.386133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.386183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.386362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.386398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.386619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.386651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.386816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.386858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.387002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.387034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 
00:36:59.167 [2024-07-13 05:26:05.387204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.387236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.387458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.387495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.387655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.387687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.387879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.387912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.388104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.388137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.388332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.388368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.388515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.388548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.388685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.388718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.388886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.388920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 00:36:59.167 [2024-07-13 05:26:05.389082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.167 [2024-07-13 05:26:05.389114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.167 qpair failed and we were unable to recover it. 
00:36:59.167 [2024-07-13 05:26:05.389266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.167 [2024-07-13 05:26:05.389308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.167 qpair failed and we were unable to recover it.
00:36:59.167 [2024-07-13 05:26:05.392058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.167 [2024-07-13 05:26:05.392106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:59.167 qpair failed and we were unable to recover it.
00:36:59.170 [... same three-line error block repeated (timestamps 05:26:05.389448 through 05:26:05.435798): every connect() to 10.0.0.2:4420 returned errno = 111, and each attempt on tqpair=0x6150001f2780 or tqpair=0x6150001ffe80 ended with "qpair failed and we were unable to recover it." ...]
00:36:59.170 [2024-07-13 05:26:05.435978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.170 [2024-07-13 05:26:05.436015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.170 qpair failed and we were unable to recover it. 00:36:59.170 [2024-07-13 05:26:05.436203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.170 [2024-07-13 05:26:05.436235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.170 qpair failed and we were unable to recover it. 00:36:59.170 [2024-07-13 05:26:05.436396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.170 [2024-07-13 05:26:05.436439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.170 qpair failed and we were unable to recover it. 00:36:59.170 [2024-07-13 05:26:05.436649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.170 [2024-07-13 05:26:05.436681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.170 qpair failed and we were unable to recover it. 00:36:59.170 [2024-07-13 05:26:05.436863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.170 [2024-07-13 05:26:05.436906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.170 qpair failed and we were unable to recover it. 00:36:59.170 [2024-07-13 05:26:05.437073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.170 [2024-07-13 05:26:05.437106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.170 qpair failed and we were unable to recover it. 00:36:59.170 [2024-07-13 05:26:05.437241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.170 [2024-07-13 05:26:05.437273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.170 qpair failed and we were unable to recover it. 00:36:59.170 [2024-07-13 05:26:05.437400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.170 [2024-07-13 05:26:05.437432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.170 qpair failed and we were unable to recover it. 00:36:59.170 [2024-07-13 05:26:05.437692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.170 [2024-07-13 05:26:05.437725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.170 qpair failed and we were unable to recover it. 00:36:59.170 [2024-07-13 05:26:05.437892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.170 [2024-07-13 05:26:05.437929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.170 qpair failed and we were unable to recover it. 
00:36:59.170 [2024-07-13 05:26:05.438106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.170 [2024-07-13 05:26:05.438142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.170 qpair failed and we were unable to recover it. 00:36:59.170 [2024-07-13 05:26:05.438334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.170 [2024-07-13 05:26:05.438367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.170 qpair failed and we were unable to recover it. 00:36:59.170 [2024-07-13 05:26:05.438575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.170 [2024-07-13 05:26:05.438612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.170 qpair failed and we were unable to recover it. 00:36:59.170 [2024-07-13 05:26:05.438784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.170 [2024-07-13 05:26:05.438819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.170 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.438982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.439015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.439192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.439227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.439490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.439546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.439741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.439773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.439947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.439983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.440189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.440222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 
00:36:59.171 [2024-07-13 05:26:05.440406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.440439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.440613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.440654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.440805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.440841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.441052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.441084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.441231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.441264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.441519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.441555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.441758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.441790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.441971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.442007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.442173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.442210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.442421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.442454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 
00:36:59.171 [2024-07-13 05:26:05.442613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.442649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.442822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.442857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.443047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.443079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.443299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.443331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.443491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.443541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.443732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.443764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.443969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.444006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.444225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.444279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.444474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.444508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.444761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.444798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 
00:36:59.171 [2024-07-13 05:26:05.444997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.445031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.445189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.445222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.445404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.445441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.445592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.445629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.445811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.445850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.446014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.446047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.446309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.446358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.446506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.446541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.446728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.446765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.446952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.446989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 
00:36:59.171 [2024-07-13 05:26:05.447175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.447208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.447386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.447422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.447617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.447676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.447864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.447905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.448093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.448126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.448365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.448423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.448630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.448662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.448824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.448860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.449077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.449110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.449270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.449303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 
00:36:59.171 [2024-07-13 05:26:05.449477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.449514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.449720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.449762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.449942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.449975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.450118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.450155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.450330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.450366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.450520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.450553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.450737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.450770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.450987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.451024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.451192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.451226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.451411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.451459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 
00:36:59.171 [2024-07-13 05:26:05.451639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.451675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.451847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.451886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.452051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.452084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.452254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.452308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.171 [2024-07-13 05:26:05.452513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.171 [2024-07-13 05:26:05.452548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.171 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.452723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.452762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.452985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.453019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.453176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.453213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.453374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.453411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.453601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.453663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 
00:36:59.172 [2024-07-13 05:26:05.453847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.453894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.454041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.454073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.454256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.454293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.454501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.454533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.454667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.454699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.454840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.454881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.455060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.455093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.455335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.455367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.455569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.455618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.455762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.455798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 
00:36:59.172 [2024-07-13 05:26:05.455948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.455983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.456151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.456184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.456322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.456356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.456518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.456552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.456710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.456743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.456905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.456938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.457133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.457184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.457441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.457500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.457725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.457769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.457976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.458010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 
00:36:59.172 [2024-07-13 05:26:05.458152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.458208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.458367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.458404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.458547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.458609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.458810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.458846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.459048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.459081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.459283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.459319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.459495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.459531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.459716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.459748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.459916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.459969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.460106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.460139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 
00:36:59.172 [2024-07-13 05:26:05.460301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.460333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.460516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.460551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.460760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.460797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.460960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.460993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.461151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.461191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.461385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.461421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.461613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.461645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.461815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.461848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.462057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.462091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.462262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.462294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 
00:36:59.172 [2024-07-13 05:26:05.462491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.462541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.462697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.462733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.462941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.462975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.463114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.463147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.463311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.463343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.463504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.463536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.463742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.463779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.463985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.464019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.464191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.464228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 00:36:59.172 [2024-07-13 05:26:05.464443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.172 [2024-07-13 05:26:05.464479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.172 qpair failed and we were unable to recover it. 
00:36:59.172 [2024-07-13 05:26:05.464664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.172 [2024-07-13 05:26:05.464701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.172 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error for tqpair=0x6150001f2780, addr=10.0.0.2, port=4420 repeats for every retry between 05:26:05.464 and 05:26:05.510; duplicate log lines elided ...]
00:36:59.176 [2024-07-13 05:26:05.510111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.176 [2024-07-13 05:26:05.510144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.176 qpair failed and we were unable to recover it.
00:36:59.176 [2024-07-13 05:26:05.510293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.510326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.510534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.510566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.510713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.510745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.510878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.510912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.511076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.511109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.511279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.511312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.511467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.511503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.511746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.511778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.511943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.511981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.512154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.512189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 
00:36:59.176 [2024-07-13 05:26:05.512365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.512397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.512574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.512609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.512799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.512831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.512972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.513006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.513188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.513224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.513409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.513445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.513640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.513672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.513884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.513927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.514074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.514121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.514304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.514336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 
00:36:59.176 [2024-07-13 05:26:05.514523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.514559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.514712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.514749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.514923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.514956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.515099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.515131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.515304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.515337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.515521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.515553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.515734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.515769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.515946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.515983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.516163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.516204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.516357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.516393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 
00:36:59.176 [2024-07-13 05:26:05.516646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.516681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.516840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.516890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.517051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.517088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.517266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.517302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.517462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.517495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.517696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.517732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.517927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.517961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.518095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.518127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.518284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.518317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.518478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.518510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 
00:36:59.176 [2024-07-13 05:26:05.518684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.518717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.518933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.518970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.519192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.519224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.519356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.519388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.519593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.519629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.519796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.519828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.519997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.520030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.520165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.520197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.520362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.520394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 00:36:59.176 [2024-07-13 05:26:05.520632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.176 [2024-07-13 05:26:05.520665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.176 qpair failed and we were unable to recover it. 
00:36:59.177 [2024-07-13 05:26:05.520823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.520859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.521089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.521121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.521262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.521295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.521533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.521565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.521739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.521774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.521973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.522007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.522193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.522229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.522378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.522415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.522564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.522596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.522776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.522812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 
00:36:59.177 [2024-07-13 05:26:05.523031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.523068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.523313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.523345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.523523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.523559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.523762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.523798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.523964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.523997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.524130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.524178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.524325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.524361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.524528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.524560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.524723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.524778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.524944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.524980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 
00:36:59.177 [2024-07-13 05:26:05.525188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.525221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.525419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.525455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.525615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.525651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.525828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.525861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.526013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.526049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.526250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.526286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.526502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.526535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.526703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.526739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.526923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.526956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.527105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.527137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 
00:36:59.177 [2024-07-13 05:26:05.527329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.527362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.527552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.527588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.527747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.527812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.528016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.528052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.528206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.528242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.528418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.528450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.528605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.528637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.528821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.528890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.529089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.529121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.529293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.529329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 
00:36:59.177 [2024-07-13 05:26:05.529502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.529538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.529708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.529741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.529923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.529960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.530107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.530142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.530296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.530328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.530514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.530551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.530699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.530735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.530914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.530947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.531074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.531106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.531254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.531289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 
00:36:59.177 [2024-07-13 05:26:05.531485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.531518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.531699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.531734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.531939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.531976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.532235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.532267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.532493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.532526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.532731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.532768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.532943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.532975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.533219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.533255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.533435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.533476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.533630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.533663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 
00:36:59.177 [2024-07-13 05:26:05.533845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.533890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.534094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.534130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.534380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.534413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.534605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.534641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.534815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.534850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.535015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.535048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.535251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.535287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.177 qpair failed and we were unable to recover it. 00:36:59.177 [2024-07-13 05:26:05.535441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.177 [2024-07-13 05:26:05.535476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.535657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.535689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.535874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.535911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 
00:36:59.178 [2024-07-13 05:26:05.536115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.536151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.536342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.536375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.536562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.536598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.536742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.536778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.536971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.537004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.537156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.537188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.537362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.537398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.537576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.537608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.537760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.537796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.538015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.538051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 
00:36:59.178 [2024-07-13 05:26:05.538206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.538239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.538420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.538452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.538644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.538680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.538880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.538913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.539093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.539128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.539334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.539371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.539555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.539588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.539780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.539815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.540022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.540055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.540243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.540275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 
00:36:59.178 [2024-07-13 05:26:05.540429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.540465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.540642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.540678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.540885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.540918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.541089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.541124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.541277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.541313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.541455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.541487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.541628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.541671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.541858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.541902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.542100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.542137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.542331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.542364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 
00:36:59.178 [2024-07-13 05:26:05.542522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.542570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.542720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.542751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.542893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.542927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.543091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.543124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.543277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.543309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.543502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.543535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.543694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.543727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.543885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.543918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.544104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.544137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.544303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.544341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 
00:36:59.178 [2024-07-13 05:26:05.544480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.544513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.544641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.544691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.544904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.544941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.545098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.545137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.545310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.545342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.545479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.545511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.545666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.545698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.545885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.545922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.546140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.546176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.546366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.546399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 
00:36:59.178 [2024-07-13 05:26:05.546586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.546622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.546803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.546838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.547071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.547104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.547253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.547285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.547478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.547514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.547701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.547734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.547927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.547963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.548152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.548194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.548373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.548406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.548591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.548623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 
00:36:59.178 [2024-07-13 05:26:05.548761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.548794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.548979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.549012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.549170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.549202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.549354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.549393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.549552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.178 [2024-07-13 05:26:05.549585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.178 qpair failed and we were unable to recover it. 00:36:59.178 [2024-07-13 05:26:05.549758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.549792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.549961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.549994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.550152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.550193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.550327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.550363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.550548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.550582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 
00:36:59.179 [2024-07-13 05:26:05.550723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.550755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.550919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.550953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.551114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.551147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.551276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.551308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.551493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.551526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.551688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.551720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.551860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.551903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.552066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.552100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.552242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.552274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.552431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.552463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 
00:36:59.179 [2024-07-13 05:26:05.552633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.552666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.552843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.552911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.553070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.553102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.553272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.553304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.553482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.553515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.553704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.553737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.553882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.553915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.554096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.554128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.554292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.554325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.554485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.554517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 
00:36:59.179 [2024-07-13 05:26:05.554672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.554714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.554877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.554910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.555053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.555086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.555247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.555279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.555445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.555477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.555665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.555698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.555876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.555909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.556069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.556102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.556231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.556263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.556416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.556448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 
00:36:59.179 [2024-07-13 05:26:05.556608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.556640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.556774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.556807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.556976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.557009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.557166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.557198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.557360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.557393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.557528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.557578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.557742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.557775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.557904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.557937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.558077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.558114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.558304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.558336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 
00:36:59.179 [2024-07-13 05:26:05.558491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.558523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.558684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.558717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.558859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.558898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.559086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.559139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.559337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.559373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.559582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.559614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.559786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.559818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.559961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.559995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.560156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.560188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.560346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.560379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 
00:36:59.179 [2024-07-13 05:26:05.560513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.560546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.560676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.560708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.560846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.560897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.561057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.561089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.561256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.561288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.561420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.179 [2024-07-13 05:26:05.561453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.179 qpair failed and we were unable to recover it. 00:36:59.179 [2024-07-13 05:26:05.561612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.561644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.561802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.561833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.562005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.562038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.562174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.562207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 
00:36:59.180 [2024-07-13 05:26:05.562372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.562404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.562570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.562602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.562742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.562775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.562940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.562974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.563129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.563165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.563350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.563388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.563602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.563635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.563793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.563826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.563991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.564024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.564219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.564251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 
00:36:59.180 [2024-07-13 05:26:05.564411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.564444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.564676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.564708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.564896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.564930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.565087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.565120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.565264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.565315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.565520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.565552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.565681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.565714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.565878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.565911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.566071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.566108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.566280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.566316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 
00:36:59.180 [2024-07-13 05:26:05.566536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.566573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.566837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.566882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.567065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.567096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.567255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.567288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.567473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.567515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.567700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.567736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.567927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.567964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.568128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.568160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.568333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.568365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.568500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.568532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 
00:36:59.180 [2024-07-13 05:26:05.568748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.568785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.568942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.568978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.569137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.569169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.569306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.569338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.569531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.569563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.569761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.569793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.569955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.569988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.570114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.570148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.570303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.570336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.570464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.570496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 
00:36:59.180 [2024-07-13 05:26:05.570679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.570714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.570864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.570904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.571092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.571125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.571286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.571319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.571453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.571485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.571685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.571722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.571882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.571915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.572042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.572074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.572247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.572283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.572449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.572482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 
00:36:59.180 [2024-07-13 05:26:05.572640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.572672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.572825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.572857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.573048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.573081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.573209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.573242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.573426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.573458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.573601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.573636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.573842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.573883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.574050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.574082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.574211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.574244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.574435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.574467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 
00:36:59.180 [2024-07-13 05:26:05.574645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.574680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.574899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.574936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.575094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.575127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.575310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.180 [2024-07-13 05:26:05.575343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.180 qpair failed and we were unable to recover it. 00:36:59.180 [2024-07-13 05:26:05.575488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.181 [2024-07-13 05:26:05.575539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.181 qpair failed and we were unable to recover it. 00:36:59.181 [2024-07-13 05:26:05.575719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.181 [2024-07-13 05:26:05.575751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.181 qpair failed and we were unable to recover it. 00:36:59.181 [2024-07-13 05:26:05.575914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.181 [2024-07-13 05:26:05.575948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.181 qpair failed and we were unable to recover it. 00:36:59.181 [2024-07-13 05:26:05.576077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.181 [2024-07-13 05:26:05.576110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.181 qpair failed and we were unable to recover it. 00:36:59.181 [2024-07-13 05:26:05.576277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.181 [2024-07-13 05:26:05.576310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.181 qpair failed and we were unable to recover it. 00:36:59.181 [2024-07-13 05:26:05.576481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.181 [2024-07-13 05:26:05.576513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.181 qpair failed and we were unable to recover it. 
00:36:59.181 [2024-07-13 05:26:05.576690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.576722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.576862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.576901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.577088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.577125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.577305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.577341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.577513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.577544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.577703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.577735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.577903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.577936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.578095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.578127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.578253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.578286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.578468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.578500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.578653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.578686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.578837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.578876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.579004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.579036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.579196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.579228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.579401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.579434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.579592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.579629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.579789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.579821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.579983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.580015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.580144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.580186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.580372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.580405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.580538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.580569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.580731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.580763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.580938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.580971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.581129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.581161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.581326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.581358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.581489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.581522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.581702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.581735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.581928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.581961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.582137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.582174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.582328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.582364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.582541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.582577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.582784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.582816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.582980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.583014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.583151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.583183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.583337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.583369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.583523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.583555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.583708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.583740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.583902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.583935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.584056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.584088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.584222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.584255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.584396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.584428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.584567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.584603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.584743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.584775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.584910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.584943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.585113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.585145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.585328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.585360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.585549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.585582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.585755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.585787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.585941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.585974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.586159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.586192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.586325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.586358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.586518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.586550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.586682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.586714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.586879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.586912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.587086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.587118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.587270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.181 [2024-07-13 05:26:05.587306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.181 qpair failed and we were unable to recover it.
00:36:59.181 [2024-07-13 05:26:05.587466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.587498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.587644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.587678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.587862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.587901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.588027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.588059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.588190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.588223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.588387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.588421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.588592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.588628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.588804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.588841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.589032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.589066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.589250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.589285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.589420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.589456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.589636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.589667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.589824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.589857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.590032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.590065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.590227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.590259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.590426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.590459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.590620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.590661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.590817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.590849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.591027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.591060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.591194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.591226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.591409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.591441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.591592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.591624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.591797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.591830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.591976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.592009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.592177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.592210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.592346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.592377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.592580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.592622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.592783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.592815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.592959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.592993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.593125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.593157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.593343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.593375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.593526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.593558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.593745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.593777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.593950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.593983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.594112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.594145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.594331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.594363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.594532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.594564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.594733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.594766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.594891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.594933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.595072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.595127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.595306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.595343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.595490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.595522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.595708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.595740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.595900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.595933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.596084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.596117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.596249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.596284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.596426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.596458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.596580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.596612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.596794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.596827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.596998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.597031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.597220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.597252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.597416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.597449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.597591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.597624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.597761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.597794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.597963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.597996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.598154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.598186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.598374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.598407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.598597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.598629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.598794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.598826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.598990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.599023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.599160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.599192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.599339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.599372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.599525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.599558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.599690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.599722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.599911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.599944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.600100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.600132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.600271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.600303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.600458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.182 [2024-07-13 05:26:05.600491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.182 qpair failed and we were unable to recover it.
00:36:59.182 [2024-07-13 05:26:05.600645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.600677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.600837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.600898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.601062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.601095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.601252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.601285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.601468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.601500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.601663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.601695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.601852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.601894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.602074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.602110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.602262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.602298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.602478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.602510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.602697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.602729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.602886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.602979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.603162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.603195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.603354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.603386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.603542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.603575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.603757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.603789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.603945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.603978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.604114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.604146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.604309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.604341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.604469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.604502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.604641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.604673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.604805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.604837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.605009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.605071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.605228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.605263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.605458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.605490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.605656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.605689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.605823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.605855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.606022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.606055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.606212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.606244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.606382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.606414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.606570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.606603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.606734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.606767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.606928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.606961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.607093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.607126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.607282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.607314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.607477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.607509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.607668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.607701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.607833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.607873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.608042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.608075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.608258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.608291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.608455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.608489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.608621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.608653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.608828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.608861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.609072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.609105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.609290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.609323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.609507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.183 [2024-07-13 05:26:05.609543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.183 qpair failed and we were unable to recover it.
00:36:59.183 [2024-07-13 05:26:05.609742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.183 [2024-07-13 05:26:05.609795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.183 qpair failed and we were unable to recover it. 00:36:59.183 [2024-07-13 05:26:05.609987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.183 [2024-07-13 05:26:05.610019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.183 qpair failed and we were unable to recover it. 00:36:59.183 [2024-07-13 05:26:05.610180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.183 [2024-07-13 05:26:05.610213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.183 qpair failed and we were unable to recover it. 00:36:59.183 [2024-07-13 05:26:05.610338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.183 [2024-07-13 05:26:05.610370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.183 qpair failed and we were unable to recover it. 00:36:59.183 [2024-07-13 05:26:05.610529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.183 [2024-07-13 05:26:05.610562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.183 qpair failed and we were unable to recover it. 00:36:59.183 [2024-07-13 05:26:05.610718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.183 [2024-07-13 05:26:05.610755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.183 qpair failed and we were unable to recover it. 00:36:59.183 [2024-07-13 05:26:05.610942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.183 [2024-07-13 05:26:05.610975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.183 qpair failed and we were unable to recover it. 00:36:59.183 [2024-07-13 05:26:05.611109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.183 [2024-07-13 05:26:05.611142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.183 qpair failed and we were unable to recover it. 00:36:59.183 [2024-07-13 05:26:05.611299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.183 [2024-07-13 05:26:05.611331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.183 qpair failed and we were unable to recover it. 00:36:59.183 [2024-07-13 05:26:05.611460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.183 [2024-07-13 05:26:05.611492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.183 qpair failed and we were unable to recover it. 
00:36:59.183 [2024-07-13 05:26:05.611618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.183 [2024-07-13 05:26:05.611669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.183 qpair failed and we were unable to recover it. 00:36:59.183 [2024-07-13 05:26:05.611851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.183 [2024-07-13 05:26:05.611890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.183 qpair failed and we were unable to recover it. 00:36:59.183 [2024-07-13 05:26:05.612022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.183 [2024-07-13 05:26:05.612055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.183 qpair failed and we were unable to recover it. 00:36:59.183 [2024-07-13 05:26:05.612201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.183 [2024-07-13 05:26:05.612234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.183 qpair failed and we were unable to recover it. 00:36:59.183 [2024-07-13 05:26:05.612358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.183 [2024-07-13 05:26:05.612390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.183 qpair failed and we were unable to recover it. 00:36:59.183 [2024-07-13 05:26:05.612575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.183 [2024-07-13 05:26:05.612608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.183 qpair failed and we were unable to recover it. 00:36:59.183 [2024-07-13 05:26:05.612747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.183 [2024-07-13 05:26:05.612779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.183 qpair failed and we were unable to recover it. 00:36:59.183 [2024-07-13 05:26:05.612940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.183 [2024-07-13 05:26:05.612973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.183 qpair failed and we were unable to recover it. 00:36:59.183 [2024-07-13 05:26:05.613107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.183 [2024-07-13 05:26:05.613140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.183 qpair failed and we were unable to recover it. 00:36:59.183 [2024-07-13 05:26:05.613308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.183 [2024-07-13 05:26:05.613339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.183 qpair failed and we were unable to recover it. 
00:36:59.183 [2024-07-13 05:26:05.613508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.183 [2024-07-13 05:26:05.613542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.183 qpair failed and we were unable to recover it. 00:36:59.183 [2024-07-13 05:26:05.613687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.183 [2024-07-13 05:26:05.613728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.183 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.613891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.613923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.614050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.614082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.614239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.614272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.614403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.614452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.614699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.614731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.614891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.614925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.615116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.615148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.615305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.615337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 
00:36:59.184 [2024-07-13 05:26:05.615471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.615503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.615665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.615697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.615856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.615896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.616059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.616091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.616273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.616309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.616490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.616526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.616744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.616781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.616949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.616983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.617143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.617175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.617358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.617409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 
00:36:59.184 [2024-07-13 05:26:05.617635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.617683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.617888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.617922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.618076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.618110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.618269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.618306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.618479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.618516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.618694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.618735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.618929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.618963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.619121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.619154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.619342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.619379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.619528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.619561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 
00:36:59.184 [2024-07-13 05:26:05.619723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.619756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.619938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.619971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.620131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.620180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.620357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.620394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.620596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.620633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.620794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.620827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.621008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.621041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.621201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.621234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.621410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.621443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.621631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.621667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 
00:36:59.184 [2024-07-13 05:26:05.621808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.621840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.622026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.622059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.622268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.622300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.622470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.622506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.622705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.622741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.622897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.622934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.623075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.623107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.623232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.623264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.623450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.623482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.623635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.623668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 
00:36:59.184 [2024-07-13 05:26:05.623826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.623858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.624062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.624122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.624296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.624337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.624552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.624590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.624796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.624830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.624982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.625017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.625178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.625236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.625478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.625518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.625738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.625798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.625966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.626002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 
00:36:59.184 [2024-07-13 05:26:05.626189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.626226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.626374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.626430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.626640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.626677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.184 qpair failed and we were unable to recover it. 00:36:59.184 [2024-07-13 05:26:05.626836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.184 [2024-07-13 05:26:05.626874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.627031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.627064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.627240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.627283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.627481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.627517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.627785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.627818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.627995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.628029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.628218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.628252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 
00:36:59.185 [2024-07-13 05:26:05.628519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.628577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.628851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.628890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.629080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.629113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.629260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.629292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.629591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.629656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.629848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.629890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.630049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.630083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.630379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.630438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.630675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.630713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.630982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.631016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 
00:36:59.185 [2024-07-13 05:26:05.631151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.631185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.631384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.631417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.631613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.631671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.631876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.631909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.632068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.632101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.632263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.632296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.632561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.632619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.632785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.632818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.633064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.633098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.633261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.633294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 
00:36:59.185 [2024-07-13 05:26:05.633464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.633498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.633654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.633687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.633830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.633863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.634051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.634086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.634225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.634277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.634483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.634520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.634744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.634780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.634967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.635000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.635162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.635196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.635362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.635411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 
00:36:59.185 [2024-07-13 05:26:05.635647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.635683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.635852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.635892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.636059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.636092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.636261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.636322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.185 qpair failed and we were unable to recover it. 00:36:59.185 [2024-07-13 05:26:05.636649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.185 [2024-07-13 05:26:05.636716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.470 qpair failed and we were unable to recover it. 00:36:59.470 [2024-07-13 05:26:05.636876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.470 [2024-07-13 05:26:05.636915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.470 qpair failed and we were unable to recover it. 00:36:59.470 [2024-07-13 05:26:05.637059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.470 [2024-07-13 05:26:05.637092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.470 qpair failed and we were unable to recover it. 00:36:59.470 [2024-07-13 05:26:05.637226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.470 [2024-07-13 05:26:05.637270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.470 qpair failed and we were unable to recover it. 00:36:59.470 [2024-07-13 05:26:05.637426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.470 [2024-07-13 05:26:05.637460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.470 qpair failed and we were unable to recover it. 00:36:59.470 [2024-07-13 05:26:05.637623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-07-13 05:26:05.637656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.471 qpair failed and we were unable to recover it. 
00:36:59.471 [2024-07-13 05:26:05.637921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-07-13 05:26:05.637956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.471 qpair failed and we were unable to recover it. 00:36:59.471 [2024-07-13 05:26:05.638130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-07-13 05:26:05.638167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.471 qpair failed and we were unable to recover it. 00:36:59.471 [2024-07-13 05:26:05.638352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-07-13 05:26:05.638402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.471 qpair failed and we were unable to recover it. 00:36:59.471 [2024-07-13 05:26:05.638572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-07-13 05:26:05.638609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.471 qpair failed and we were unable to recover it. 00:36:59.471 [2024-07-13 05:26:05.638768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-07-13 05:26:05.638806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.471 qpair failed and we were unable to recover it. 00:36:59.471 [2024-07-13 05:26:05.638997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-07-13 05:26:05.639031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.471 qpair failed and we were unable to recover it. 00:36:59.471 [2024-07-13 05:26:05.639194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-07-13 05:26:05.639230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.471 qpair failed and we were unable to recover it. 00:36:59.471 [2024-07-13 05:26:05.639464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-07-13 05:26:05.639516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.471 qpair failed and we were unable to recover it. 00:36:59.471 [2024-07-13 05:26:05.639713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-07-13 05:26:05.639747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.471 qpair failed and we were unable to recover it. 00:36:59.471 [2024-07-13 05:26:05.639954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-07-13 05:26:05.639987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.471 qpair failed and we were unable to recover it. 
00:36:59.471 [2024-07-13 05:26:05.640232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-07-13 05:26:05.640265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.471 qpair failed and we were unable to recover it. 00:36:59.471 [2024-07-13 05:26:05.640491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-07-13 05:26:05.640527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.471 qpair failed and we were unable to recover it. 00:36:59.471 [2024-07-13 05:26:05.640745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-07-13 05:26:05.640782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.471 qpair failed and we were unable to recover it. 00:36:59.471 [2024-07-13 05:26:05.640979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-07-13 05:26:05.641014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.471 qpair failed and we were unable to recover it. 00:36:59.471 [2024-07-13 05:26:05.641265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-07-13 05:26:05.641311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.471 qpair failed and we were unable to recover it. 00:36:59.471 [2024-07-13 05:26:05.641476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-07-13 05:26:05.641536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.471 qpair failed and we were unable to recover it. 00:36:59.471 [2024-07-13 05:26:05.641778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-07-13 05:26:05.641825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.471 qpair failed and we were unable to recover it. 00:36:59.471 [2024-07-13 05:26:05.642017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-07-13 05:26:05.642079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:59.471 qpair failed and we were unable to recover it. 00:36:59.471 [2024-07-13 05:26:05.642286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-07-13 05:26:05.642324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:59.471 qpair failed and we were unable to recover it. 00:36:59.471 [2024-07-13 05:26:05.642515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-07-13 05:26:05.642553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:59.471 qpair failed and we were unable to recover it. 
00:36:59.471 [2024-07-13 05:26:05.642738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.471 [2024-07-13 05:26:05.642774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:36:59.471 qpair failed and we were unable to recover it.
00:36:59.476 [... the same three-line failure (connect() returns errno = 111, ECONNREFUSED; the qpair cannot be recovered) repeats continuously from 05:26:05.642 through 05:26:05.688: the first three occurrences are for tqpair=0x615000210000, all subsequent ones for tqpair=0x6150001ffe80, every attempt against addr=10.0.0.2, port=4420 ...]
00:36:59.476 [2024-07-13 05:26:05.688902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.476 [2024-07-13 05:26:05.688936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.476 qpair failed and we were unable to recover it. 00:36:59.476 [2024-07-13 05:26:05.689095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.476 [2024-07-13 05:26:05.689126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.476 qpair failed and we were unable to recover it. 00:36:59.476 [2024-07-13 05:26:05.689330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.476 [2024-07-13 05:26:05.689367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.476 qpair failed and we were unable to recover it. 00:36:59.476 [2024-07-13 05:26:05.689558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.689591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.689797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.689834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.689997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.690031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.690204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.690237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.690395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.690431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.690618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.690656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.690809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.690842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 
00:36:59.477 [2024-07-13 05:26:05.691036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.691068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.691284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.691318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.691511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.691544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.691733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.691769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.691961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.691998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.692159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.692192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.692360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.692394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.692601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.692638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.692825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.692858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.693024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.693060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 
00:36:59.477 [2024-07-13 05:26:05.693278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.693312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.693473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.693505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.693674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.693711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.693886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.693922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.694087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.694123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.694303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.694339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.694517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.694553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.694760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.694793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.694959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.694992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.695155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.695194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 
00:36:59.477 [2024-07-13 05:26:05.695381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.695414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.695554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.695605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.695823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.695873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.696102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.696135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.696326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.696375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.696586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.696623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.696834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.696873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.697057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.697094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.697274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.697312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.697456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.697488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 
00:36:59.477 [2024-07-13 05:26:05.697630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.697662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.697828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.697861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.698044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.698077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.477 [2024-07-13 05:26:05.698266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.477 [2024-07-13 05:26:05.698298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.477 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.698480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.698512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.698689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.698725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.698954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.698989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.699128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.699161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.699297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.699331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.699485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.699521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 
00:36:59.478 [2024-07-13 05:26:05.699672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.699709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.699859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.699910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.700050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.700084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.700288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.700340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.700518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.700559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.700725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.700780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.700984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.701020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.701211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.701258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.701417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.701455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.701638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.701677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 
00:36:59.478 [2024-07-13 05:26:05.701889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.701924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.702060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.702094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.702257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.702308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.702547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.702585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.702796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.702834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.703034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.703069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.703255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.703303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.703502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.703556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.703706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.703758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.703930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.703965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 
00:36:59.478 [2024-07-13 05:26:05.704153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.704205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.704424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.704476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.704722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.704781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.704947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.704980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.705135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.705189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.705407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.705458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.705643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.705694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.705829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.705863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.706108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.706162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.706329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.706370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 
00:36:59.478 [2024-07-13 05:26:05.706585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.706643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.706855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.706897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.707072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.707106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.707370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.707419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.478 qpair failed and we were unable to recover it. 00:36:59.478 [2024-07-13 05:26:05.707668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.478 [2024-07-13 05:26:05.707727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.707981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.708015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.708195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.708232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.708477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.708532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.708712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.708748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.708936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.708970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 
00:36:59.479 [2024-07-13 05:26:05.709129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.709161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.709343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.709378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.709611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.709666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.709822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.709854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.710021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.710053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.710227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.710279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.710491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.710531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.710725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.710760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.710895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.710929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.711100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.711134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 
00:36:59.479 [2024-07-13 05:26:05.711319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.711357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.711562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.711631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.711816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.711853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.712049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.712083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.712262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.712298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.712528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.712587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.712757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.712789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.712933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.712968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.713149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.713192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.713350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.713387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 
00:36:59.479 [2024-07-13 05:26:05.713649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.713708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.713878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.713912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.714102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.714168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.714458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.714518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.714761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.714819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.715017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.715051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.715223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.715283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.715441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.715474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.715634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.479 [2024-07-13 05:26:05.715670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.479 qpair failed and we were unable to recover it. 00:36:59.479 [2024-07-13 05:26:05.715901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.480 [2024-07-13 05:26:05.715952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.480 qpair failed and we were unable to recover it. 
00:36:59.480 [2024-07-13 05:26:05.716141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.480 [2024-07-13 05:26:05.716180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.480 qpair failed and we were unable to recover it. 00:36:59.480 [2024-07-13 05:26:05.716364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.480 [2024-07-13 05:26:05.716399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.480 qpair failed and we were unable to recover it. 00:36:59.480 [2024-07-13 05:26:05.716552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.480 [2024-07-13 05:26:05.716587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.480 qpair failed and we were unable to recover it. 00:36:59.480 [2024-07-13 05:26:05.716875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.480 [2024-07-13 05:26:05.716912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.480 qpair failed and we were unable to recover it. 00:36:59.480 [2024-07-13 05:26:05.717117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.480 [2024-07-13 05:26:05.717168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.480 qpair failed and we were unable to recover it. 00:36:59.480 [2024-07-13 05:26:05.717386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.480 [2024-07-13 05:26:05.717456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.480 qpair failed and we were unable to recover it. 00:36:59.480 [2024-07-13 05:26:05.717747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.480 [2024-07-13 05:26:05.717803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.480 qpair failed and we were unable to recover it. 00:36:59.480 [2024-07-13 05:26:05.717999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.480 [2024-07-13 05:26:05.718033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.480 qpair failed and we were unable to recover it. 00:36:59.480 [2024-07-13 05:26:05.718243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.480 [2024-07-13 05:26:05.718279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.480 qpair failed and we were unable to recover it. 00:36:59.480 [2024-07-13 05:26:05.718538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.480 [2024-07-13 05:26:05.718594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.480 qpair failed and we were unable to recover it. 
00:36:59.480 [2024-07-13 05:26:05.718769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.480 [2024-07-13 05:26:05.718805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.480 qpair failed and we were unable to recover it. 00:36:59.480 [2024-07-13 05:26:05.719026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.480 [2024-07-13 05:26:05.719059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.480 qpair failed and we were unable to recover it. 00:36:59.480 [2024-07-13 05:26:05.719254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.480 [2024-07-13 05:26:05.719303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:59.480 qpair failed and we were unable to recover it. 00:36:59.480 [2024-07-13 05:26:05.719507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.480 [2024-07-13 05:26:05.719560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:59.480 qpair failed and we were unable to recover it. 00:36:59.480 [2024-07-13 05:26:05.719777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.480 [2024-07-13 05:26:05.719834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:59.480 qpair failed and we were unable to recover it. 00:36:59.480 [2024-07-13 05:26:05.720044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.480 [2024-07-13 05:26:05.720078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.480 qpair failed and we were unable to recover it. 00:36:59.480 [2024-07-13 05:26:05.720289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.480 [2024-07-13 05:26:05.720341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.480 qpair failed and we were unable to recover it. 00:36:59.480 [2024-07-13 05:26:05.720548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.480 [2024-07-13 05:26:05.720608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.480 qpair failed and we were unable to recover it. 00:36:59.480 [2024-07-13 05:26:05.720956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.480 [2024-07-13 05:26:05.720991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.480 qpair failed and we were unable to recover it. 00:36:59.480 [2024-07-13 05:26:05.721161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.480 [2024-07-13 05:26:05.721214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.480 qpair failed and we were unable to recover it. 
00:36:59.485 [2024-07-13 05:26:05.765578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.485 [2024-07-13 05:26:05.765611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.485 qpair failed and we were unable to recover it. 00:36:59.485 [2024-07-13 05:26:05.765797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.485 [2024-07-13 05:26:05.765836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.485 qpair failed and we were unable to recover it. 00:36:59.485 [2024-07-13 05:26:05.766062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.485 [2024-07-13 05:26:05.766098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.485 qpair failed and we were unable to recover it. 00:36:59.485 [2024-07-13 05:26:05.766261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.485 [2024-07-13 05:26:05.766294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.485 qpair failed and we were unable to recover it. 00:36:59.485 [2024-07-13 05:26:05.766450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.485 [2024-07-13 05:26:05.766500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.485 qpair failed and we were unable to recover it. 00:36:59.485 [2024-07-13 05:26:05.766702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.485 [2024-07-13 05:26:05.766738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.485 qpair failed and we were unable to recover it. 00:36:59.485 [2024-07-13 05:26:05.766894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.485 [2024-07-13 05:26:05.766927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.485 qpair failed and we were unable to recover it. 00:36:59.485 [2024-07-13 05:26:05.767068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.485 [2024-07-13 05:26:05.767119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.485 qpair failed and we were unable to recover it. 00:36:59.485 [2024-07-13 05:26:05.767267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.485 [2024-07-13 05:26:05.767302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.485 qpair failed and we were unable to recover it. 00:36:59.485 [2024-07-13 05:26:05.767480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.485 [2024-07-13 05:26:05.767513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.485 qpair failed and we were unable to recover it. 
00:36:59.485 [2024-07-13 05:26:05.767724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.485 [2024-07-13 05:26:05.767760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.485 qpair failed and we were unable to recover it. 00:36:59.485 [2024-07-13 05:26:05.767937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.485 [2024-07-13 05:26:05.767977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.485 qpair failed and we were unable to recover it. 00:36:59.485 [2024-07-13 05:26:05.768157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.485 [2024-07-13 05:26:05.768189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.485 qpair failed and we were unable to recover it. 00:36:59.485 [2024-07-13 05:26:05.768405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.485 [2024-07-13 05:26:05.768458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.485 qpair failed and we were unable to recover it. 00:36:59.485 [2024-07-13 05:26:05.768630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.485 [2024-07-13 05:26:05.768665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.485 qpair failed and we were unable to recover it. 00:36:59.485 [2024-07-13 05:26:05.768861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.485 [2024-07-13 05:26:05.768910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.485 qpair failed and we were unable to recover it. 00:36:59.485 [2024-07-13 05:26:05.769100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.485 [2024-07-13 05:26:05.769148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.485 qpair failed and we were unable to recover it. 00:36:59.485 [2024-07-13 05:26:05.769326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.485 [2024-07-13 05:26:05.769362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.485 qpair failed and we were unable to recover it. 00:36:59.485 [2024-07-13 05:26:05.769520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.485 [2024-07-13 05:26:05.769553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.485 qpair failed and we were unable to recover it. 00:36:59.485 [2024-07-13 05:26:05.769731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.485 [2024-07-13 05:26:05.769767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.485 qpair failed and we were unable to recover it. 
00:36:59.485 [2024-07-13 05:26:05.769957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.485 [2024-07-13 05:26:05.769995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.485 qpair failed and we were unable to recover it. 00:36:59.485 [2024-07-13 05:26:05.770156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.485 [2024-07-13 05:26:05.770188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.485 qpair failed and we were unable to recover it. 00:36:59.485 [2024-07-13 05:26:05.770330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.485 [2024-07-13 05:26:05.770368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.485 qpair failed and we were unable to recover it. 00:36:59.485 [2024-07-13 05:26:05.770560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.485 [2024-07-13 05:26:05.770596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.485 qpair failed and we were unable to recover it. 00:36:59.485 [2024-07-13 05:26:05.770804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.485 [2024-07-13 05:26:05.770836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.485 qpair failed and we were unable to recover it. 00:36:59.485 [2024-07-13 05:26:05.771013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.485 [2024-07-13 05:26:05.771050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.485 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.771225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.771261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.771476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.771508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.771659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.771695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.771908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.771946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 
00:36:59.486 [2024-07-13 05:26:05.772114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.772147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.772276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.772327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.772507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.772542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.772720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.772752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.772995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.773043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.773237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.773291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.773505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.773540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.773749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.773787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.773983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.774021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.774202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.774235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 
00:36:59.486 [2024-07-13 05:26:05.774493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.774526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.774666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.774721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.774920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.774954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.775153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.775190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.775373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.775411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.775607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.775642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.775775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.775809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.776005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.776039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.776214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.776247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.776460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.776521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 
00:36:59.486 [2024-07-13 05:26:05.776696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.776750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.776951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.776985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.777170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.777207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.777358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.777397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.777570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.777602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.777747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.777786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.777978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.778012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.778152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.778185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.778419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.778478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.778666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.778704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 
00:36:59.486 [2024-07-13 05:26:05.778887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.778922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.779086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.779119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.779307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.779351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.779537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.779571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.779752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.779789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.779958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.486 [2024-07-13 05:26:05.779997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.486 qpair failed and we were unable to recover it. 00:36:59.486 [2024-07-13 05:26:05.780196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.780230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.780451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.780515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.780725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.780762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.780928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.780961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 
00:36:59.487 [2024-07-13 05:26:05.781128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.781162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.781313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.781349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.781530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.781563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.781707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.781742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.781911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.781964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.782143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.782177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.782417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.782473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.782685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.782718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.782883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.782917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.783049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.783083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 
00:36:59.487 [2024-07-13 05:26:05.783296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.783333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.783488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.783520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.783657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.783708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.783864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.783920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.784102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.784135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.784294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.784345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.784519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.784556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.784755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.784790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.784960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.784996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.785218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.785256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 
00:36:59.487 [2024-07-13 05:26:05.785414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.785448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.785611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.785645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.785837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.785884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.786081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.786115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.786358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.786418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.786620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.786658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.786880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.786930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.787085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.787123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.787304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.787341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.787506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.787538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 
00:36:59.487 [2024-07-13 05:26:05.787675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.787727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.787928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.787964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.788145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.788183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.788362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.788399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.788588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.788620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.788755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.788787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.487 qpair failed and we were unable to recover it. 00:36:59.487 [2024-07-13 05:26:05.788957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.487 [2024-07-13 05:26:05.788991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.789209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.789241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.789404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.789437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.789569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.789603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 
00:36:59.488 [2024-07-13 05:26:05.789758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.789807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.789975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.790008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.790219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.790255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.790423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.790460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.790677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.790709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.790885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.790922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.791104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.791141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.791322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.791354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.791537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.791572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.791749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.791786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 
00:36:59.488 [2024-07-13 05:26:05.791946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.791979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.792184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.792221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.792426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.792473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.792665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.792697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.792884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.792934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.793078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.793110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.793273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.793306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.793489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.793522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.793680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.793712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.793900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.793956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 
00:36:59.488 [2024-07-13 05:26:05.794159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.794207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.794375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.794411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.794606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.794646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.794858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.794905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.795068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.795102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.795239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.795274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.795580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.795636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.795819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.795853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.796031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.796066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 00:36:59.488 [2024-07-13 05:26:05.796230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.488 [2024-07-13 05:26:05.796264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.488 qpair failed and we were unable to recover it. 
00:36:59.488 [2024-07-13 05:26:05.796424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.488 [2024-07-13 05:26:05.796457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:59.488 qpair failed and we were unable to recover it.
00:36:59.494 [2024-07-13 05:26:05.796589 through 05:26:05.841438] the same three-record failure sequence (connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it.") repeats continuously for tqpair=0x6150001ffe80, 0x615000210000, and 0x61500021ff00, every attempt targeting addr=10.0.0.2, port=4420
00:36:59.494 [2024-07-13 05:26:05.841627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.494 [2024-07-13 05:26:05.841666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.494 qpair failed and we were unable to recover it. 00:36:59.494 [2024-07-13 05:26:05.841851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.494 [2024-07-13 05:26:05.841897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.494 qpair failed and we were unable to recover it. 00:36:59.494 [2024-07-13 05:26:05.842061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.494 [2024-07-13 05:26:05.842093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.494 qpair failed and we were unable to recover it. 00:36:59.494 [2024-07-13 05:26:05.842268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.494 [2024-07-13 05:26:05.842300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.494 qpair failed and we were unable to recover it. 00:36:59.494 [2024-07-13 05:26:05.842489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.494 [2024-07-13 05:26:05.842522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.494 qpair failed and we were unable to recover it. 00:36:59.494 [2024-07-13 05:26:05.842708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.494 [2024-07-13 05:26:05.842742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.494 qpair failed and we were unable to recover it. 00:36:59.494 [2024-07-13 05:26:05.842940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.494 [2024-07-13 05:26:05.842980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.494 qpair failed and we were unable to recover it. 00:36:59.494 [2024-07-13 05:26:05.843163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.494 [2024-07-13 05:26:05.843196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.494 qpair failed and we were unable to recover it. 00:36:59.494 [2024-07-13 05:26:05.843381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.494 [2024-07-13 05:26:05.843414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.494 qpair failed and we were unable to recover it. 00:36:59.494 [2024-07-13 05:26:05.843595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.494 [2024-07-13 05:26:05.843629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.494 qpair failed and we were unable to recover it. 
00:36:59.494 [2024-07-13 05:26:05.843885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.494 [2024-07-13 05:26:05.843947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.494 qpair failed and we were unable to recover it. 00:36:59.494 [2024-07-13 05:26:05.844109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.494 [2024-07-13 05:26:05.844164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.494 qpair failed and we were unable to recover it. 00:36:59.494 [2024-07-13 05:26:05.844340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.494 [2024-07-13 05:26:05.844378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.494 qpair failed and we were unable to recover it. 00:36:59.494 [2024-07-13 05:26:05.844561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.494 [2024-07-13 05:26:05.844595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.494 qpair failed and we were unable to recover it. 00:36:59.494 [2024-07-13 05:26:05.844754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.494 [2024-07-13 05:26:05.844787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.494 qpair failed and we were unable to recover it. 00:36:59.494 [2024-07-13 05:26:05.844977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.494 [2024-07-13 05:26:05.845011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.494 qpair failed and we were unable to recover it. 00:36:59.494 [2024-07-13 05:26:05.845142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.494 [2024-07-13 05:26:05.845175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.494 qpair failed and we were unable to recover it. 00:36:59.494 [2024-07-13 05:26:05.845342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.494 [2024-07-13 05:26:05.845377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.494 qpair failed and we were unable to recover it. 00:36:59.494 [2024-07-13 05:26:05.845574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.494 [2024-07-13 05:26:05.845608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.494 qpair failed and we were unable to recover it. 00:36:59.494 [2024-07-13 05:26:05.845768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.494 [2024-07-13 05:26:05.845807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.494 qpair failed and we were unable to recover it. 
00:36:59.494 [2024-07-13 05:26:05.846009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.494 [2024-07-13 05:26:05.846057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.494 qpair failed and we were unable to recover it. 00:36:59.494 [2024-07-13 05:26:05.846271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.494 [2024-07-13 05:26:05.846310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.494 qpair failed and we were unable to recover it. 00:36:59.494 [2024-07-13 05:26:05.846497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.494 [2024-07-13 05:26:05.846531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.494 qpair failed and we were unable to recover it. 00:36:59.494 [2024-07-13 05:26:05.846734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.494 [2024-07-13 05:26:05.846771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.494 qpair failed and we were unable to recover it. 00:36:59.494 [2024-07-13 05:26:05.846916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.494 [2024-07-13 05:26:05.846954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.494 qpair failed and we were unable to recover it. 00:36:59.494 [2024-07-13 05:26:05.847140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.494 [2024-07-13 05:26:05.847173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.494 qpair failed and we were unable to recover it. 00:36:59.494 [2024-07-13 05:26:05.847310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.494 [2024-07-13 05:26:05.847342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.494 qpair failed and we were unable to recover it. 00:36:59.494 [2024-07-13 05:26:05.847501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.494 [2024-07-13 05:26:05.847533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.494 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.847665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.847699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.847920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.847990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 
00:36:59.495 [2024-07-13 05:26:05.848197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.848233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.848366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.848401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.848560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.848593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.848762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.848796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.848963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.848998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.849207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.849244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.849421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.849458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.849644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.849678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.849845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.849888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.850117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.850155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 
00:36:59.495 [2024-07-13 05:26:05.850337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.850371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.850577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.850614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.850806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.850840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.851024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.851057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.851239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.851304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.851519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.851554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.851788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.851826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.852041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.852076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.852241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.852278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.852461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.852495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 
00:36:59.495 [2024-07-13 05:26:05.852679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.852712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.852835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.852877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.853068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.853101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.853255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.853288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.853416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.853455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.853619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.853653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.853814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.853852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.854039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.854076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.854225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.854259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.854421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.854459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 
00:36:59.495 [2024-07-13 05:26:05.854619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.854652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.854826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.854859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.855043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.855090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.855246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.855283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.855495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.855529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.855694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.855728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.855885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.855928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.856083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.856116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.495 qpair failed and we were unable to recover it. 00:36:59.495 [2024-07-13 05:26:05.856280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.495 [2024-07-13 05:26:05.856313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.856471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.856507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 
00:36:59.496 [2024-07-13 05:26:05.856693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.856726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.856856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.856929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.857108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.857145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.857331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.857364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.857527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.857560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.857692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.857725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.857911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.857944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.858124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.858161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.858323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.858366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.858556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.858589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 
00:36:59.496 [2024-07-13 05:26:05.858719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.858752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.858902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.858939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.859144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.859177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.859416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.859471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.859680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.859717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.859997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.860031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.860212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.860259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.860457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.860493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.860627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.860662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.860827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.860862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 
00:36:59.496 [2024-07-13 05:26:05.861040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.861074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.861270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.861304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.861465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.861500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.861683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.861721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.861929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.861964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.862126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.862164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.862348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.862385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.862566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.862599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.862787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.862820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.862998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.863036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 
00:36:59.496 [2024-07-13 05:26:05.863175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.863209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.863418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.863455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.863636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.863673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.496 [2024-07-13 05:26:05.863825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.496 [2024-07-13 05:26:05.863857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.496 qpair failed and we were unable to recover it. 00:36:59.497 [2024-07-13 05:26:05.864068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.497 [2024-07-13 05:26:05.864122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.497 qpair failed and we were unable to recover it. 00:36:59.497 [2024-07-13 05:26:05.864321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.497 [2024-07-13 05:26:05.864359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.497 qpair failed and we were unable to recover it. 00:36:59.497 [2024-07-13 05:26:05.864568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.497 [2024-07-13 05:26:05.864601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.497 qpair failed and we were unable to recover it. 00:36:59.497 [2024-07-13 05:26:05.864764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.497 [2024-07-13 05:26:05.864797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.497 qpair failed and we were unable to recover it. 00:36:59.497 [2024-07-13 05:26:05.864999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.497 [2024-07-13 05:26:05.865034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.497 qpair failed and we were unable to recover it. 00:36:59.497 [2024-07-13 05:26:05.865205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.497 [2024-07-13 05:26:05.865238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.497 qpair failed and we were unable to recover it. 
00:36:59.497 [2024-07-13 05:26:05.865414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.497 [2024-07-13 05:26:05.865452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.497 qpair failed and we were unable to recover it. 00:36:59.497 [2024-07-13 05:26:05.865627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.497 [2024-07-13 05:26:05.865671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.497 qpair failed and we were unable to recover it. 00:36:59.497 [2024-07-13 05:26:05.865862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.497 [2024-07-13 05:26:05.865911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.497 qpair failed and we were unable to recover it. 00:36:59.497 [2024-07-13 05:26:05.866081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.497 [2024-07-13 05:26:05.866115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.497 qpair failed and we were unable to recover it. 00:36:59.497 [2024-07-13 05:26:05.866256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.497 [2024-07-13 05:26:05.866291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.497 qpair failed and we were unable to recover it. 00:36:59.497 [2024-07-13 05:26:05.866446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.497 [2024-07-13 05:26:05.866479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.497 qpair failed and we were unable to recover it. 00:36:59.497 [2024-07-13 05:26:05.866664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.497 [2024-07-13 05:26:05.866733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.497 qpair failed and we were unable to recover it. 00:36:59.497 [2024-07-13 05:26:05.866923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.497 [2024-07-13 05:26:05.866960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.497 qpair failed and we were unable to recover it. 00:36:59.497 [2024-07-13 05:26:05.867115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.497 [2024-07-13 05:26:05.867155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.497 qpair failed and we were unable to recover it. 00:36:59.497 [2024-07-13 05:26:05.867314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.497 [2024-07-13 05:26:05.867347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.497 qpair failed and we were unable to recover it. 
00:36:59.497 [2024-07-13 05:26:05.867485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.497 [2024-07-13 05:26:05.867517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.497 qpair failed and we were unable to recover it. 00:36:59.497 [2024-07-13 05:26:05.867701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.497 [2024-07-13 05:26:05.867734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.497 qpair failed and we were unable to recover it. 00:36:59.497 [2024-07-13 05:26:05.867884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.497 [2024-07-13 05:26:05.867923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.497 qpair failed and we were unable to recover it. 00:36:59.497 [2024-07-13 05:26:05.868090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.497 [2024-07-13 05:26:05.868123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.497 qpair failed and we were unable to recover it. 00:36:59.497 [2024-07-13 05:26:05.868321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.497 [2024-07-13 05:26:05.868355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.497 qpair failed and we were unable to recover it. 00:36:59.497 [2024-07-13 05:26:05.868542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.497 [2024-07-13 05:26:05.868576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.497 qpair failed and we were unable to recover it. 00:36:59.497 [2024-07-13 05:26:05.868714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.497 [2024-07-13 05:26:05.868749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.497 qpair failed and we were unable to recover it. 00:36:59.497 [2024-07-13 05:26:05.868890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.497 [2024-07-13 05:26:05.868926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.497 qpair failed and we were unable to recover it. 00:36:59.497 [2024-07-13 05:26:05.869082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.497 [2024-07-13 05:26:05.869116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.497 qpair failed and we were unable to recover it. 00:36:59.497 [2024-07-13 05:26:05.869299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.497 [2024-07-13 05:26:05.869337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.497 qpair failed and we were unable to recover it. 
00:36:59.497 [2024-07-13 05:26:05.869511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.497 [2024-07-13 05:26:05.869544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.497 qpair failed and we were unable to recover it.
00:36:59.497-00:36:59.502 [... the same connect()-failed / sock-connection-error / qpair-failed triple repeats roughly 200 more times between 05:26:05.869679 and 05:26:05.912745, alternating between tqpair=0x61500021ff00 and tqpair=0x6150001ffe80, always for addr=10.0.0.2, port=4420 ...]
00:36:59.502 [2024-07-13 05:26:05.912982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.502 [2024-07-13 05:26:05.913016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.913186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.913220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.913381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.913432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.913613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.913645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.913822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.913857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.914040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.914074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.914240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.914273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.914404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.914438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.914604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.914656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.914841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.914883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 
00:36:59.503 [2024-07-13 05:26:05.915080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.915117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.915262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.915299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.915510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.915543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.915693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.915728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.915887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.915920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.916098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.916130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.916294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.916327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.916503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.916539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.916706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.916742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.916968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.917003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 
00:36:59.503 [2024-07-13 05:26:05.917158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.917191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.917348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.917381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.917567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.917601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.917829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.917874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.918086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.918119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.918282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.918315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.918476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.918514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.918682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.918716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.918844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.918884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.919017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.919051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 
00:36:59.503 [2024-07-13 05:26:05.919238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.919271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.919450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.919507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.919687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.919723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.919931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.919965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.920103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.920139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.920345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.920382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.920565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.920598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.920770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.920804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.921048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.921083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 00:36:59.503 [2024-07-13 05:26:05.921214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.921249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.503 qpair failed and we were unable to recover it. 
00:36:59.503 [2024-07-13 05:26:05.921420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.503 [2024-07-13 05:26:05.921454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.921623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.921656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.921813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.921850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.922063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.922097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.922262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.922312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.922464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.922498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.922676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.922715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.922874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.922926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.923121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.923154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.923283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.923315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 
00:36:59.504 [2024-07-13 05:26:05.923542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.923578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.923765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.923799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.923985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.924022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.924201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.924238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.924420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.924453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.924628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.924661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.924804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.924847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.925024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.925056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.925218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.925250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.925409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.925449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 
00:36:59.504 [2024-07-13 05:26:05.925663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.925696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.925852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.925913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.926056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.926092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.926262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.926296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.926468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.926503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.926685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.926722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.926908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.926948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.927112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.927151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.927318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.927355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.927539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.927573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 
00:36:59.504 [2024-07-13 05:26:05.927701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.927733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.927922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.927956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.928114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.928147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.928353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.928390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.928554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.928587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.928750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.928782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.928965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.929003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.929177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.929213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.929369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.929402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.504 [2024-07-13 05:26:05.929562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.929595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 
00:36:59.504 [2024-07-13 05:26:05.929733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.504 [2024-07-13 05:26:05.929766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.504 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.929933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.929967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.930160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.930214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.930433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.930469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.930632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.930667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.930806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.930840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.930991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.931025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.931190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.931224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.931419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.931453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.931611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.931644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 
00:36:59.505 [2024-07-13 05:26:05.931803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.931836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.932019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.932056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.932210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.932246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.932436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.932468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.932632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.932665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.932848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.932915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.933113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.933146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.933379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.933435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.933608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.933644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.933823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.933856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 
00:36:59.505 [2024-07-13 05:26:05.934001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.934034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.934192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.934226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.934362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.934396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.934534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.934567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.934776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.934812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.934979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.935013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.935173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.935211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.935403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.935439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.935615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.935648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.935780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.935813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 
00:36:59.505 [2024-07-13 05:26:05.935996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.936035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.936174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.936206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.936339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.936371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.936527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.936563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.936720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.936753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.936897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.505 [2024-07-13 05:26:05.936931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.505 qpair failed and we were unable to recover it. 00:36:59.505 [2024-07-13 05:26:05.937117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.506 [2024-07-13 05:26:05.937149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.506 qpair failed and we were unable to recover it. 00:36:59.506 [2024-07-13 05:26:05.937312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.506 [2024-07-13 05:26:05.937345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.506 qpair failed and we were unable to recover it. 00:36:59.506 [2024-07-13 05:26:05.937549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.506 [2024-07-13 05:26:05.937586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.506 qpair failed and we were unable to recover it. 00:36:59.506 [2024-07-13 05:26:05.937761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.506 [2024-07-13 05:26:05.937797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.506 qpair failed and we were unable to recover it. 
00:36:59.506 [2024-07-13 05:26:05.937969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.506 [2024-07-13 05:26:05.938003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.506 qpair failed and we were unable to recover it. 00:36:59.506 [2024-07-13 05:26:05.938224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.506 [2024-07-13 05:26:05.938276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.506 qpair failed and we were unable to recover it. 00:36:59.506 [2024-07-13 05:26:05.938465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.506 [2024-07-13 05:26:05.938505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.506 qpair failed and we were unable to recover it. 00:36:59.506 [2024-07-13 05:26:05.938663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.506 [2024-07-13 05:26:05.938698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.506 qpair failed and we were unable to recover it. 00:36:59.506 [2024-07-13 05:26:05.938875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.506 [2024-07-13 05:26:05.938910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.506 qpair failed and we were unable to recover it. 00:36:59.506 [2024-07-13 05:26:05.939116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.506 [2024-07-13 05:26:05.939153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.506 qpair failed and we were unable to recover it. 00:36:59.506 [2024-07-13 05:26:05.939332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.506 [2024-07-13 05:26:05.939372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.506 qpair failed and we were unable to recover it. 00:36:59.506 [2024-07-13 05:26:05.939589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.506 [2024-07-13 05:26:05.939647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.506 qpair failed and we were unable to recover it. 00:36:59.506 [2024-07-13 05:26:05.939842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.506 [2024-07-13 05:26:05.939889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.506 qpair failed and we were unable to recover it. 00:36:59.506 [2024-07-13 05:26:05.940057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.506 [2024-07-13 05:26:05.940091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:59.506 qpair failed and we were unable to recover it. 
00:36:59.506 [2024-07-13 05:26:05.940254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.506 [2024-07-13 05:26:05.940288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:36:59.506 qpair failed and we were unable to recover it.
[... the same three-record pattern (posix_sock_create connect() failure, nvme_tcp_qpair_connect_sock error, "qpair failed and we were unable to recover it.") repeats for every subsequent reconnect attempt between 2024-07-13 05:26:05.940454 and 05:26:05.986078 (console time 00:36:59.506 through 00:36:59.799), alternating between tqpair=0x61500021ff00 and tqpair=0x6150001ffe80; every attempt targets 10.0.0.2 port 4420 and fails with errno = 111 ...]
00:36:59.799 [2024-07-13 05:26:05.986044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.799 [2024-07-13 05:26:05.986078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:59.799 qpair failed and we were unable to recover it.
00:36:59.799 [2024-07-13 05:26:05.986283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.799 [2024-07-13 05:26:05.986319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.799 qpair failed and we were unable to recover it. 00:36:59.799 [2024-07-13 05:26:05.986520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.799 [2024-07-13 05:26:05.986557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.799 qpair failed and we were unable to recover it. 00:36:59.799 [2024-07-13 05:26:05.986742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.800 [2024-07-13 05:26:05.986778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.800 qpair failed and we were unable to recover it. 00:36:59.800 [2024-07-13 05:26:05.986987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.800 [2024-07-13 05:26:05.987020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.800 qpair failed and we were unable to recover it. 00:36:59.800 [2024-07-13 05:26:05.987204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.800 [2024-07-13 05:26:05.987242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.800 qpair failed and we were unable to recover it. 00:36:59.800 [2024-07-13 05:26:05.987451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.800 [2024-07-13 05:26:05.987484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.800 qpair failed and we were unable to recover it. 00:36:59.800 [2024-07-13 05:26:05.987670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.800 [2024-07-13 05:26:05.987711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.800 qpair failed and we were unable to recover it. 00:36:59.800 [2024-07-13 05:26:05.987925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.800 [2024-07-13 05:26:05.987958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.800 qpair failed and we were unable to recover it. 00:36:59.800 [2024-07-13 05:26:05.988143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.800 [2024-07-13 05:26:05.988176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.800 qpair failed and we were unable to recover it. 00:36:59.800 [2024-07-13 05:26:05.988367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.800 [2024-07-13 05:26:05.988405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.800 qpair failed and we were unable to recover it. 
00:36:59.800 [2024-07-13 05:26:05.988610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.800 [2024-07-13 05:26:05.988647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.800 qpair failed and we were unable to recover it. 00:36:59.800 [2024-07-13 05:26:05.988836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.800 [2024-07-13 05:26:05.988874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.800 qpair failed and we were unable to recover it. 00:36:59.800 [2024-07-13 05:26:05.989056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.800 [2024-07-13 05:26:05.989093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.800 qpair failed and we were unable to recover it. 00:36:59.800 [2024-07-13 05:26:05.989238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.800 [2024-07-13 05:26:05.989274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.800 qpair failed and we were unable to recover it. 00:36:59.800 [2024-07-13 05:26:05.989429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.800 [2024-07-13 05:26:05.989462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.800 qpair failed and we were unable to recover it. 00:36:59.800 [2024-07-13 05:26:05.989645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.800 [2024-07-13 05:26:05.989681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.800 qpair failed and we were unable to recover it. 00:36:59.800 [2024-07-13 05:26:05.989829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.800 [2024-07-13 05:26:05.989872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.800 qpair failed and we were unable to recover it. 00:36:59.800 [2024-07-13 05:26:05.990054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.800 [2024-07-13 05:26:05.990087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.800 qpair failed and we were unable to recover it. 00:36:59.800 [2024-07-13 05:26:05.990266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.800 [2024-07-13 05:26:05.990302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.800 qpair failed and we were unable to recover it. 00:36:59.800 [2024-07-13 05:26:05.990503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.800 [2024-07-13 05:26:05.990540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.800 qpair failed and we were unable to recover it. 
00:36:59.800 [2024-07-13 05:26:05.990707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.800 [2024-07-13 05:26:05.990740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.800 qpair failed and we were unable to recover it. 00:36:59.800 [2024-07-13 05:26:05.990910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.800 [2024-07-13 05:26:05.990947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.800 qpair failed and we were unable to recover it. 00:36:59.800 [2024-07-13 05:26:05.991145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.800 [2024-07-13 05:26:05.991181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.800 qpair failed and we were unable to recover it. 00:36:59.800 [2024-07-13 05:26:05.991358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.800 [2024-07-13 05:26:05.991390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.800 qpair failed and we were unable to recover it. 00:36:59.800 [2024-07-13 05:26:05.991545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.800 [2024-07-13 05:26:05.991583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.800 qpair failed and we were unable to recover it. 00:36:59.800 [2024-07-13 05:26:05.991722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.991758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.991936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.991969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.992108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.992141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.992269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.992301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.992460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.992492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 
00:36:59.801 [2024-07-13 05:26:05.992653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.992687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.992894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.992928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.993062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.993104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.993249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.993282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.993500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.993533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.993746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.993779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.993909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.993942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.994104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.994157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.994339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.994372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.994545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.994582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 
00:36:59.801 [2024-07-13 05:26:05.994748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.994784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.994969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.995002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.995184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.995221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.995403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.995439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.995611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.995644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.995797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.995833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.996000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.996037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.996223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.996256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.996462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.996499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.996654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.996690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 
00:36:59.801 [2024-07-13 05:26:05.996855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.996897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.997058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.997090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.997254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.997287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.997447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.997480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.997655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.997691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.997908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.997942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.998099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.998132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.998317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.998353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.998558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.998590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.998720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.998753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 
00:36:59.801 [2024-07-13 05:26:05.998951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.998984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.999165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.999201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.801 qpair failed and we were unable to recover it. 00:36:59.801 [2024-07-13 05:26:05.999387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.801 [2024-07-13 05:26:05.999420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:05.999577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:05.999610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:05.999800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:05.999836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.000016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.000050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.000188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.000220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.000435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.000471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.000632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.000666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.000849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.000893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 
00:36:59.802 [2024-07-13 05:26:06.001080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.001112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.001295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.001328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.001486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.001522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.001675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.001717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.001910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.001944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.002122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.002158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.002323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.002355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.002516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.002550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.002685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.002718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.002878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.002929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 
00:36:59.802 [2024-07-13 05:26:06.003109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.003141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.003297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.003334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.003483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.003520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.003703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.003735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.003952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.003989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.004165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.004203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.004411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.004451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.004628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.004665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.004850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.004895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.005076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.005110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 
00:36:59.802 [2024-07-13 05:26:06.005322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.005359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.005562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.005598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.005784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.005817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.006003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.006041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.006182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.006218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.006378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.006412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.006542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.006586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.006753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.006785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.006942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.006975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.007129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.007165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 
00:36:59.802 [2024-07-13 05:26:06.007370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.007407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.007609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.007642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.007796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.802 [2024-07-13 05:26:06.007832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.802 qpair failed and we were unable to recover it. 00:36:59.802 [2024-07-13 05:26:06.008051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.008084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.008270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.008303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.008454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.008490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.008631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.008668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.008943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.008976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.009139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.009171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.009321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.009354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 
00:36:59.803 [2024-07-13 05:26:06.009485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.009518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.009655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.009708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.009881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.009918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.010075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.010108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.010240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.010289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.010502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.010534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.010706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.010739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.010891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.010928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.011099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.011136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.011324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.011356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 
00:36:59.803 [2024-07-13 05:26:06.011536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.011573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.011743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.011780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.011935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.011968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.012109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.012160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.012333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.012369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.012527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.012560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.012724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.012775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.012952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.012989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.013151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.013185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.013366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.013403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 
00:36:59.803 [2024-07-13 05:26:06.013617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.013649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.013813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.013845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.014007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.014044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.014246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.014282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.014435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.014469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.014639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.014689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.014891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.014929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.015122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.015155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.015324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.015356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 00:36:59.803 [2024-07-13 05:26:06.015519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.803 [2024-07-13 05:26:06.015553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.803 qpair failed and we were unable to recover it. 
00:36:59.809 [2024-07-13 05:26:06.057394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.809 [2024-07-13 05:26:06.057431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.809 qpair failed and we were unable to recover it. 00:36:59.809 [2024-07-13 05:26:06.057610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.809 [2024-07-13 05:26:06.057642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.809 qpair failed and we were unable to recover it. 00:36:59.809 [2024-07-13 05:26:06.057825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.809 [2024-07-13 05:26:06.057858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.809 qpair failed and we were unable to recover it. 00:36:59.809 [2024-07-13 05:26:06.058035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.809 [2024-07-13 05:26:06.058068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.809 qpair failed and we were unable to recover it. 00:36:59.809 [2024-07-13 05:26:06.058253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.809 [2024-07-13 05:26:06.058286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.809 qpair failed and we were unable to recover it. 00:36:59.809 [2024-07-13 05:26:06.058420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.809 [2024-07-13 05:26:06.058453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.809 qpair failed and we were unable to recover it. 00:36:59.809 [2024-07-13 05:26:06.058647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.809 [2024-07-13 05:26:06.058683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.809 qpair failed and we were unable to recover it. 00:36:59.809 [2024-07-13 05:26:06.058892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.809 [2024-07-13 05:26:06.058926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.809 qpair failed and we were unable to recover it. 00:36:59.809 [2024-07-13 05:26:06.059065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.809 [2024-07-13 05:26:06.059098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.809 qpair failed and we were unable to recover it. 00:36:59.809 [2024-07-13 05:26:06.059257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.809 [2024-07-13 05:26:06.059290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.809 qpair failed and we were unable to recover it. 
00:36:59.809 [2024-07-13 05:26:06.059450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.809 [2024-07-13 05:26:06.059483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.809 qpair failed and we were unable to recover it. 00:36:59.809 [2024-07-13 05:26:06.059652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.809 [2024-07-13 05:26:06.059685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.809 qpair failed and we were unable to recover it. 00:36:59.809 [2024-07-13 05:26:06.059846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.809 [2024-07-13 05:26:06.059914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.809 qpair failed and we were unable to recover it. 00:36:59.809 [2024-07-13 05:26:06.060076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.809 [2024-07-13 05:26:06.060114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.809 qpair failed and we were unable to recover it. 00:36:59.809 [2024-07-13 05:26:06.060302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.809 [2024-07-13 05:26:06.060350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.809 qpair failed and we were unable to recover it. 00:36:59.809 [2024-07-13 05:26:06.060508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.060542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.060730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.060763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.060897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.060931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.061061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.061093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.061317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.061350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 
00:36:59.810 [2024-07-13 05:26:06.061569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.061602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.061782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.061815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.061979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.062013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.062143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.062176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.062333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.062366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.062586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.062618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.062769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.062801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.062978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.063011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.063142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.063174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.063356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.063392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 
00:36:59.810 [2024-07-13 05:26:06.063550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.063583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.063764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.063801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.063967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.064000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.064206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.064242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.064427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.064459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.064644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.064677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.064809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.064842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.065032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.065065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.065196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.065229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.065395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.065446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 
00:36:59.810 [2024-07-13 05:26:06.065638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.065671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.065830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.065862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.066088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.066125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.066280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.066313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.066497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.066529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.066667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.066700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.066884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.066917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.067072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.067108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.067286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.810 [2024-07-13 05:26:06.067322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.810 qpair failed and we were unable to recover it. 00:36:59.810 [2024-07-13 05:26:06.067529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.067561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 
00:36:59.811 [2024-07-13 05:26:06.067724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.067756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.067913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.067946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.068105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.068137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.068326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.068368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.068553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.068590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.068770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.068802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.068956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.068992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.069201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.069237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.069422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.069455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.069617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.069649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 
00:36:59.811 [2024-07-13 05:26:06.069832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.069871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.070059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.070091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.070220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.070269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.070426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.070462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.070640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.070672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.070808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.070841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.071010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.071042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.071180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.071213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.071391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.071427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.071597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.071632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 
00:36:59.811 [2024-07-13 05:26:06.071783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.071816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.071998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.072035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.072186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.072222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.072431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.072464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.072622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.072655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.072864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.072922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.073103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.073145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.073284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.073317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.073523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.073559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.073737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.073770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 
00:36:59.811 [2024-07-13 05:26:06.073910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.073944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.074105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.074138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.074301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.074333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.074516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.074549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.074763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.074799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.075005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.075038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.075176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.075209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.075406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.075438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.075598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.075630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 00:36:59.811 [2024-07-13 05:26:06.075772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.811 [2024-07-13 05:26:06.075805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.811 qpair failed and we were unable to recover it. 
00:36:59.812 [2024-07-13 05:26:06.075980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.076013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.076198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.076231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.076388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.076425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.076596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.076636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.076840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.076880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.077016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.077049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.077214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.077246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.077474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.077507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.077685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.077721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.077888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.077940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 
00:36:59.812 [2024-07-13 05:26:06.078102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.078135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.078295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.078328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.078461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.078496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.078656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.078689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.078814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.078846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.079075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.079111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.079322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.079355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.079506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.079542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.079714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.079750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.079960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.079993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 
00:36:59.812 [2024-07-13 05:26:06.080129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.080163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.080322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.080355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.080515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.080548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.080690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.080723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.080855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.080896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.081081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.081113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.081301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.081338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.081533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.081570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.081723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.081756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.081947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.081981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 
00:36:59.812 [2024-07-13 05:26:06.082145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.082182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.082360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.082392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.082529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.082563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.082722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.082755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.082913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.082947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.083131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.083164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.083381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.083417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.083597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.083630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.083817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.083849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 00:36:59.812 [2024-07-13 05:26:06.083999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.084031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.812 qpair failed and we were unable to recover it. 
00:36:59.812 [2024-07-13 05:26:06.084163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.812 [2024-07-13 05:26:06.084196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:59.813 qpair failed and we were unable to recover it.
00:36:59.818 [the same three-line error pattern repeats roughly 200 more times between 05:26:06.084 and 05:26:06.127: every entry is connect() failed with errno = 111 followed by a sock connection error against addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it."; the failing tqpair is 0x6150001ffe80 for most repetitions, with 0x615000210000 and 0x6150001f2780 appearing in the final stretch]
[the identical errno = 111 failure for tqpair=0x6150001f2780 repeats continuously through 05:26:06.162822; final occurrence in this burst:]
00:36:59.822 [2024-07-13 05:26:06.162973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.822 [2024-07-13 05:26:06.163009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.822 qpair failed and we were unable to recover it.
00:36:59.822 [2024-07-13 05:26:06.163218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.822 [2024-07-13 05:26:06.163250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.822 qpair failed and we were unable to recover it. 00:36:59.822 [2024-07-13 05:26:06.163384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.822 [2024-07-13 05:26:06.163417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.822 qpair failed and we were unable to recover it. 00:36:59.822 [2024-07-13 05:26:06.163600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.822 [2024-07-13 05:26:06.163632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.822 qpair failed and we were unable to recover it. 00:36:59.822 [2024-07-13 05:26:06.163790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.822 [2024-07-13 05:26:06.163822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.822 qpair failed and we were unable to recover it. 00:36:59.822 [2024-07-13 05:26:06.163965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.822 [2024-07-13 05:26:06.163998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.822 qpair failed and we were unable to recover it. 00:36:59.822 [2024-07-13 05:26:06.164158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.822 [2024-07-13 05:26:06.164190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.164326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.164358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.164531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.164567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.164796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.164832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.165059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.165092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 
00:36:59.823 [2024-07-13 05:26:06.165231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.165268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.165403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.165436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.165591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.165623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.165781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.165830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.165994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.166027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.166184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.166224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.166382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.166414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.166576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.166610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.166739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.166772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.166932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.166965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 
00:36:59.823 [2024-07-13 05:26:06.167123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.167156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.167347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.167379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.167547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.167580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.167742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.167774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.167969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.168002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.168167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.168199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.168353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.168386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.168542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.168575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.168753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.168789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.168937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.168974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 
00:36:59.823 [2024-07-13 05:26:06.169157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.169191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.169314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.169346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.169512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.169544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.169688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.169721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.169880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.169913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.170074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.170116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.170245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.170278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.170490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.170526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.170705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.170741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 00:36:59.823 [2024-07-13 05:26:06.170897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.170931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.823 qpair failed and we were unable to recover it. 
00:36:59.823 [2024-07-13 05:26:06.171093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.823 [2024-07-13 05:26:06.171125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.171291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.171325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.171455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.171488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.171640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.171673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.171855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.171895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.172030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.172063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.172212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.172248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.172422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.172458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.172618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.172650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.172785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.172817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 
00:36:59.824 [2024-07-13 05:26:06.172983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.173021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.173159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.173192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.173350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.173382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.173563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.173600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.173787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.173819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.173971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.174004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.174164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.174196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.174350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.174382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.174512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.174546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.174679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.174711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 
00:36:59.824 [2024-07-13 05:26:06.174878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.174911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.175039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.175072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.175230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.175262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.175454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.175486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.175651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.175684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.175828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.175860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.176008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.176040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.176181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.176215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.176373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.176406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.176537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.176569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 
00:36:59.824 [2024-07-13 05:26:06.176737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.176769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.176905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.176939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.177102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.177134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.177266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.177315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.177458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.177494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.177677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.177709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.177876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.177909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.178106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.178138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.178295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.178328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.178485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.178517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 
00:36:59.824 [2024-07-13 05:26:06.178676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.824 [2024-07-13 05:26:06.178712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.824 qpair failed and we were unable to recover it. 00:36:59.824 [2024-07-13 05:26:06.178900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.178951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.179108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.179141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.179333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.179369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.179545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.179578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.179734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.179766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.179933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.179966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.180150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.180183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.180318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.180350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.180534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.180572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 
00:36:59.825 [2024-07-13 05:26:06.180724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.180760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.180900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.180953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.181113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.181145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.181304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.181336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.181518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.181551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.181734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.181770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.181940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.181973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.182135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.182168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.182324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.182357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.182558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.182622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 
00:36:59.825 [2024-07-13 05:26:06.182757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.182790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.182950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.182982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.183147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.183178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.183344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.183377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.183592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.183628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.183789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.183821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.184043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.184079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.184230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.184265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.184425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.184461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.184632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.184664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 
00:36:59.825 [2024-07-13 05:26:06.184807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.184839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.185009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.185058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.185266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.185319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.185573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.185628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.185800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.185840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.186019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.186053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.186243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.186294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.186483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.186541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.186715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.186752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.186924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.186975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 
00:36:59.825 [2024-07-13 05:26:06.187134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.187166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.187331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.825 [2024-07-13 05:26:06.187362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.825 qpair failed and we were unable to recover it. 00:36:59.825 [2024-07-13 05:26:06.187494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.826 [2024-07-13 05:26:06.187526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.826 qpair failed and we were unable to recover it. 00:36:59.826 [2024-07-13 05:26:06.187732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.826 [2024-07-13 05:26:06.187765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.826 qpair failed and we were unable to recover it. 00:36:59.826 [2024-07-13 05:26:06.187951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.826 [2024-07-13 05:26:06.187984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.826 qpair failed and we were unable to recover it. 00:36:59.826 [2024-07-13 05:26:06.188153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.826 [2024-07-13 05:26:06.188184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.826 qpair failed and we were unable to recover it. 00:36:59.826 [2024-07-13 05:26:06.188437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.826 [2024-07-13 05:26:06.188492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.826 qpair failed and we were unable to recover it. 00:36:59.826 [2024-07-13 05:26:06.188673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.826 [2024-07-13 05:26:06.188707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.826 qpair failed and we were unable to recover it. 00:36:59.826 [2024-07-13 05:26:06.188884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.826 [2024-07-13 05:26:06.188934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.826 qpair failed and we were unable to recover it. 00:36:59.826 [2024-07-13 05:26:06.189066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.826 [2024-07-13 05:26:06.189097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.826 qpair failed and we were unable to recover it. 
00:36:59.826 [2024-07-13 05:26:06.189258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.826 [2024-07-13 05:26:06.189300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.826 qpair failed and we were unable to recover it.
00:36:59.826 [2024-07-13 05:26:06.190298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.826 [2024-07-13 05:26:06.190368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:36:59.826 qpair failed and we were unable to recover it.
[... the same three-line pattern -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it." -- repeats continuously from 05:26:06.189474 through 05:26:06.233473, alternating between tqpair=0x6150001f2780 and tqpair=0x615000210000, always against addr=10.0.0.2, port=4420 ...]
00:36:59.831 [2024-07-13 05:26:06.233677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.831 [2024-07-13 05:26:06.233713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.831 qpair failed and we were unable to recover it. 00:36:59.831 [2024-07-13 05:26:06.233917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.831 [2024-07-13 05:26:06.233953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.831 qpair failed and we were unable to recover it. 00:36:59.831 [2024-07-13 05:26:06.234111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.831 [2024-07-13 05:26:06.234144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.831 qpair failed and we were unable to recover it. 00:36:59.831 [2024-07-13 05:26:06.234272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.831 [2024-07-13 05:26:06.234304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.831 qpair failed and we were unable to recover it. 00:36:59.831 [2024-07-13 05:26:06.234438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.831 [2024-07-13 05:26:06.234470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.831 qpair failed and we were unable to recover it. 00:36:59.831 [2024-07-13 05:26:06.234639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.831 [2024-07-13 05:26:06.234672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.831 qpair failed and we were unable to recover it. 00:36:59.831 [2024-07-13 05:26:06.234813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.831 [2024-07-13 05:26:06.234846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.831 qpair failed and we were unable to recover it. 00:36:59.831 [2024-07-13 05:26:06.235039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.831 [2024-07-13 05:26:06.235071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.831 qpair failed and we were unable to recover it. 00:36:59.831 [2024-07-13 05:26:06.235202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.831 [2024-07-13 05:26:06.235235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.831 qpair failed and we were unable to recover it. 00:36:59.831 [2024-07-13 05:26:06.235434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.831 [2024-07-13 05:26:06.235482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.831 qpair failed and we were unable to recover it. 
00:36:59.831 [2024-07-13 05:26:06.235694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.831 [2024-07-13 05:26:06.235726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.831 qpair failed and we were unable to recover it. 00:36:59.831 [2024-07-13 05:26:06.235886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.831 [2024-07-13 05:26:06.235919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.831 qpair failed and we were unable to recover it. 00:36:59.831 [2024-07-13 05:26:06.236098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.831 [2024-07-13 05:26:06.236133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.831 qpair failed and we were unable to recover it. 00:36:59.831 [2024-07-13 05:26:06.236336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.831 [2024-07-13 05:26:06.236372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.831 qpair failed and we were unable to recover it. 00:36:59.831 [2024-07-13 05:26:06.236550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.831 [2024-07-13 05:26:06.236583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.831 qpair failed and we were unable to recover it. 00:36:59.831 [2024-07-13 05:26:06.236746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.831 [2024-07-13 05:26:06.236778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.831 qpair failed and we were unable to recover it. 00:36:59.831 [2024-07-13 05:26:06.236909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.236942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.237077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.237114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.237251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.237284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.237443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.237477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 
00:36:59.832 [2024-07-13 05:26:06.237638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.237670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.237808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.237840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.238005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.238037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.238200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.238232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.238366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.238415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.238594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.238630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.238821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.238854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.239037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.239069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.239254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.239286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.239419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.239451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 
00:36:59.832 [2024-07-13 05:26:06.239618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.239660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.239856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.239902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.240085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.240117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.240249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.240282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.240425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.240457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.240590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.240622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.240800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.240837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.241032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.241064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.241221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.241253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.241403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.241438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 
00:36:59.832 [2024-07-13 05:26:06.241608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.241640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.241800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.241832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.242001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.242034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.242165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.242197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.242335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.242367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.242545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.242581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.242728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.242764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.242927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.242960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.243134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.243166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.243300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.243333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 
00:36:59.832 [2024-07-13 05:26:06.243468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.243499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.243633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.243666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.243802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.243834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.244042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.244074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.244208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.244240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.244402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.244435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.244595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.832 [2024-07-13 05:26:06.244627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.832 qpair failed and we were unable to recover it. 00:36:59.832 [2024-07-13 05:26:06.244796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.244832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.245000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.245033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.245221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.245253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 
00:36:59.833 [2024-07-13 05:26:06.245429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.245465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.245614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.245650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.245830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.245862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.246031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.246067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.246216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.246251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.246402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.246434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.246595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.246627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.246759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.246791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.246979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.247012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.247250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.247283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 
00:36:59.833 [2024-07-13 05:26:06.247460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.247495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.247693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.247726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.247883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.247916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.248051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.248083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.248250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.248282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.248471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.248503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.248642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.248675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.248823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.248859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.249049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.249083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.249270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.249306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 
00:36:59.833 [2024-07-13 05:26:06.249469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.249502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.249691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.249723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.249858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.249897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.250099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.250131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.250300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.250332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.250466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.250498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.250687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.250719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.250858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.250909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.251068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.251100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.251262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.251294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 
00:36:59.833 [2024-07-13 05:26:06.251488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.251520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.251697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.251733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.251893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.251926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.252116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.252148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.252323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.252370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.252536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.252569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.252728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.252761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.833 [2024-07-13 05:26:06.252895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.833 [2024-07-13 05:26:06.252934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.833 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.253096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.253129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.253254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.253286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 
00:36:59.834 [2024-07-13 05:26:06.253467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.253502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.253662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.253694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.253858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.253916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.254093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.254130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.254333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.254366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.254525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.254558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.254710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.254743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.254901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.254934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.255088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.255124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.255325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.255360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 
00:36:59.834 [2024-07-13 05:26:06.255534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.255566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.255708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.255740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.255900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.255933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.256106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.256138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.256321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.256354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.256519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.256551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.256710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.256742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.256900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.256950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.257104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.257139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.257288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.257320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 
00:36:59.834 [2024-07-13 05:26:06.257481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.257513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.257639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.257671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.257844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.257881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.258082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.258118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.258263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.258299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.258488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.258521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.258694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.258726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.258888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.258921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.259082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.259115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 00:36:59.834 [2024-07-13 05:26:06.259276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.834 [2024-07-13 05:26:06.259308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:59.834 qpair failed and we were unable to recover it. 
00:36:59.834 [2024-07-13 05:26:06.259486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.834 [2024-07-13 05:26:06.259522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:59.834 qpair failed and we were unable to recover it.
[... the identical three-line failure sequence repeats for every reconnect attempt from 05:26:06.259 through 05:26:06.303: connect() to addr=10.0.0.2, port=4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error on tqpair=0x6150001f2780, and each time the qpair fails and cannot be recovered ...]
00:37:00.127 [2024-07-13 05:26:06.303122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.127 [2024-07-13 05:26:06.303155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.127 qpair failed and we were unable to recover it.
00:37:00.127 [2024-07-13 05:26:06.303310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.127 [2024-07-13 05:26:06.303342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.127 qpair failed and we were unable to recover it. 00:37:00.127 [2024-07-13 05:26:06.303528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.127 [2024-07-13 05:26:06.303560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.127 qpair failed and we were unable to recover it. 00:37:00.127 [2024-07-13 05:26:06.303796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.127 [2024-07-13 05:26:06.303828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.127 qpair failed and we were unable to recover it. 00:37:00.127 [2024-07-13 05:26:06.303999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.127 [2024-07-13 05:26:06.304031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.127 qpair failed and we were unable to recover it. 00:37:00.127 [2024-07-13 05:26:06.304193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.127 [2024-07-13 05:26:06.304225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.127 qpair failed and we were unable to recover it. 00:37:00.127 [2024-07-13 05:26:06.304387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.127 [2024-07-13 05:26:06.304419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.127 qpair failed and we were unable to recover it. 00:37:00.127 [2024-07-13 05:26:06.304608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.127 [2024-07-13 05:26:06.304641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.127 qpair failed and we were unable to recover it. 00:37:00.127 [2024-07-13 05:26:06.304811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.127 [2024-07-13 05:26:06.304843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.127 qpair failed and we were unable to recover it. 00:37:00.127 [2024-07-13 05:26:06.305034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.127 [2024-07-13 05:26:06.305070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.127 qpair failed and we were unable to recover it. 00:37:00.127 [2024-07-13 05:26:06.305246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.127 [2024-07-13 05:26:06.305281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.127 qpair failed and we were unable to recover it. 
00:37:00.127 [2024-07-13 05:26:06.305434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.127 [2024-07-13 05:26:06.305476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.127 qpair failed and we were unable to recover it. 00:37:00.127 [2024-07-13 05:26:06.305644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.127 [2024-07-13 05:26:06.305677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.127 qpair failed and we were unable to recover it. 00:37:00.127 [2024-07-13 05:26:06.305815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.127 [2024-07-13 05:26:06.305847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.127 qpair failed and we were unable to recover it. 00:37:00.127 [2024-07-13 05:26:06.306039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.127 [2024-07-13 05:26:06.306071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.127 qpair failed and we were unable to recover it. 00:37:00.127 [2024-07-13 05:26:06.306226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.127 [2024-07-13 05:26:06.306259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.127 qpair failed and we were unable to recover it. 00:37:00.127 [2024-07-13 05:26:06.306441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.127 [2024-07-13 05:26:06.306477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.127 qpair failed and we were unable to recover it. 00:37:00.127 [2024-07-13 05:26:06.306629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.127 [2024-07-13 05:26:06.306662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.306796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.306829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.306959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.306993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.307149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.307181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 
00:37:00.128 [2024-07-13 05:26:06.307338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.307371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.307531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.307563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.307694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.307727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.307938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.307974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.308138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.308175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.308377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.308409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.308569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.308601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.308839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.308879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.309057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.309089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.309219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.309251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 
00:37:00.128 [2024-07-13 05:26:06.309415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.309465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.309639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.309671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.309833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.309872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.310010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.310042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.310204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.310236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.310389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.310424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.310629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.310661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.310845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.310909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.311072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.311104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.311265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.311297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 
00:37:00.128 [2024-07-13 05:26:06.311453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.311485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.311643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.311675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.311849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.311892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.312058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.312090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.312229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.312261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.312449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.312482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.312616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.312648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.312832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.312864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.312997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.313030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.313221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.313254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 
00:37:00.128 [2024-07-13 05:26:06.313432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.313468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.313673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.313709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.313897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.313931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.314089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.314121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.314287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.314319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.314480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.314511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.314644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.128 [2024-07-13 05:26:06.314676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.128 qpair failed and we were unable to recover it. 00:37:00.128 [2024-07-13 05:26:06.314917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.314954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.315139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.315172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.315328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.315363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 
00:37:00.129 [2024-07-13 05:26:06.315526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.315559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.315719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.315751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.315939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.315972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.316131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.316165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.316355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.316387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.316629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.316661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.316829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.316878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.317062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.317093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.317272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.317307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.317468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.317506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 
00:37:00.129 [2024-07-13 05:26:06.317671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.317704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.317837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.317877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.318083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.318114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.318273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.318305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.318505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.318552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.318759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.318795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.318986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.319019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.319156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.319193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.319323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.319354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.319515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.319547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 
00:37:00.129 [2024-07-13 05:26:06.319679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.319712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.319876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.319909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.320069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.320101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.320229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.320261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.320464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.320500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.320660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.320692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.320854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.320894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.321030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.321063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.321251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.321283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.321470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.321506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 
00:37:00.129 [2024-07-13 05:26:06.321683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.321715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.321851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.321891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.322042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.322074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.322229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.322261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.322419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.322451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.322635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.322670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.322883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.322934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.129 [2024-07-13 05:26:06.323095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.129 [2024-07-13 05:26:06.323127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.129 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.323272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.323307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.323468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.323504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 
00:37:00.130 [2024-07-13 05:26:06.323712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.323743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.323929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.323966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.324145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.324180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.324390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.324422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.324562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.324594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.324757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.324789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.324969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.325002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.325156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.325188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.325364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.325400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.325602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.325634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 
00:37:00.130 [2024-07-13 05:26:06.325770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.325802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.325964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.325997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.326138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.326171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.326336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.326368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.326557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.326593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.326776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.326808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.326955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.326993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.327126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.327164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.327327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.327359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.327536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.327573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 
00:37:00.130 [2024-07-13 05:26:06.327781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.327817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.327987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.328020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.328176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.328208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.328422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.328458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.328609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.328643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.328801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.328837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.329022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.329055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.329241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.329273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.329452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.329487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 00:37:00.130 [2024-07-13 05:26:06.329653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.130 [2024-07-13 05:26:06.329690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.130 qpair failed and we were unable to recover it. 
00:37:00.130 [2024-07-13 05:26:06.329877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.130 [2024-07-13 05:26:06.329911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.130 qpair failed and we were unable to recover it.
[... the three-line failure pattern above repeats, timestamps aside, for every reconnect attempt in this window: ~210 occurrences from 05:26:06.329877 through 05:26:06.375424, all against the same tqpair=0x6150001f2780 at 10.0.0.2 port 4420, every one ending in errno = 111 and an unrecovered qpair ...]
00:37:00.137 [2024-07-13 05:26:06.375675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.137 [2024-07-13 05:26:06.375708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.137 qpair failed and we were unable to recover it. 00:37:00.137 [2024-07-13 05:26:06.375924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.137 [2024-07-13 05:26:06.375960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.137 qpair failed and we were unable to recover it. 00:37:00.137 [2024-07-13 05:26:06.376126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.137 [2024-07-13 05:26:06.376159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.137 qpair failed and we were unable to recover it. 00:37:00.137 [2024-07-13 05:26:06.376326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.137 [2024-07-13 05:26:06.376358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.137 qpair failed and we were unable to recover it. 00:37:00.137 [2024-07-13 05:26:06.376597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.137 [2024-07-13 05:26:06.376634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.137 qpair failed and we were unable to recover it. 00:37:00.137 [2024-07-13 05:26:06.376788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.137 [2024-07-13 05:26:06.376824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.137 qpair failed and we were unable to recover it. 00:37:00.137 [2024-07-13 05:26:06.377012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.137 [2024-07-13 05:26:06.377048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.137 qpair failed and we were unable to recover it. 00:37:00.137 [2024-07-13 05:26:06.377223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.137 [2024-07-13 05:26:06.377259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.137 qpair failed and we were unable to recover it. 00:37:00.137 [2024-07-13 05:26:06.377426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.137 [2024-07-13 05:26:06.377458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.137 qpair failed and we were unable to recover it. 00:37:00.137 [2024-07-13 05:26:06.377592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.137 [2024-07-13 05:26:06.377642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.137 qpair failed and we were unable to recover it. 
00:37:00.137 [2024-07-13 05:26:06.377841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.137 [2024-07-13 05:26:06.377885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.137 qpair failed and we were unable to recover it. 00:37:00.137 [2024-07-13 05:26:06.378073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.137 [2024-07-13 05:26:06.378110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.137 qpair failed and we were unable to recover it. 00:37:00.137 [2024-07-13 05:26:06.378291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.137 [2024-07-13 05:26:06.378323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.137 qpair failed and we were unable to recover it. 00:37:00.137 [2024-07-13 05:26:06.378496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.137 [2024-07-13 05:26:06.378531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.137 qpair failed and we were unable to recover it. 00:37:00.137 [2024-07-13 05:26:06.378665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.137 [2024-07-13 05:26:06.378700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.137 qpair failed and we were unable to recover it. 00:37:00.137 [2024-07-13 05:26:06.378905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.137 [2024-07-13 05:26:06.378969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.137 qpair failed and we were unable to recover it. 00:37:00.137 [2024-07-13 05:26:06.379160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.137 [2024-07-13 05:26:06.379193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.137 qpair failed and we were unable to recover it. 00:37:00.137 [2024-07-13 05:26:06.379357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.137 [2024-07-13 05:26:06.379389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.137 qpair failed and we were unable to recover it. 00:37:00.137 [2024-07-13 05:26:06.379558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.137 [2024-07-13 05:26:06.379595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.137 qpair failed and we were unable to recover it. 00:37:00.137 [2024-07-13 05:26:06.379745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.137 [2024-07-13 05:26:06.379781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.137 qpair failed and we were unable to recover it. 
00:37:00.137 [2024-07-13 05:26:06.379937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.137 [2024-07-13 05:26:06.379970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.137 qpair failed and we were unable to recover it. 00:37:00.137 [2024-07-13 05:26:06.380212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.137 [2024-07-13 05:26:06.380248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.137 qpair failed and we were unable to recover it. 00:37:00.137 [2024-07-13 05:26:06.380432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.137 [2024-07-13 05:26:06.380469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.137 qpair failed and we were unable to recover it. 00:37:00.137 [2024-07-13 05:26:06.380628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.137 [2024-07-13 05:26:06.380660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.137 qpair failed and we were unable to recover it. 00:37:00.137 [2024-07-13 05:26:06.380851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.137 [2024-07-13 05:26:06.380893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.138 [2024-07-13 05:26:06.381070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.381102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.138 [2024-07-13 05:26:06.381255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.381291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.138 [2024-07-13 05:26:06.381481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.381513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.138 [2024-07-13 05:26:06.381642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.381674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.138 [2024-07-13 05:26:06.381859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.381911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 
00:37:00.138 [2024-07-13 05:26:06.382088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.382124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.138 [2024-07-13 05:26:06.382307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.382349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.138 [2024-07-13 05:26:06.382528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.382561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.138 [2024-07-13 05:26:06.382770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.382806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.138 [2024-07-13 05:26:06.382993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.383030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.138 [2024-07-13 05:26:06.383224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.383279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.138 [2024-07-13 05:26:06.383465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.383498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.138 [2024-07-13 05:26:06.383702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.383738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.138 [2024-07-13 05:26:06.383926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.383960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.138 [2024-07-13 05:26:06.384205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.384237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 
00:37:00.138 [2024-07-13 05:26:06.384426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.384459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.138 [2024-07-13 05:26:06.384665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.384702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.138 [2024-07-13 05:26:06.384878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.384914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.138 [2024-07-13 05:26:06.385090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.385126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.138 [2024-07-13 05:26:06.385311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.385347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.138 [2024-07-13 05:26:06.385557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.385593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.138 [2024-07-13 05:26:06.385805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.385838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.138 [2024-07-13 05:26:06.385988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.386038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.138 [2024-07-13 05:26:06.386248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.386280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.138 [2024-07-13 05:26:06.386463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.386499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 
00:37:00.138 [2024-07-13 05:26:06.386673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.386709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.138 [2024-07-13 05:26:06.386925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.386959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.138 [2024-07-13 05:26:06.387097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.387129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.138 [2024-07-13 05:26:06.387316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.387348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.138 [2024-07-13 05:26:06.387535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.387583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.138 [2024-07-13 05:26:06.387837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.138 [2024-07-13 05:26:06.387878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.138 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.388093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.388126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.388312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.388349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.388499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.388537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.388714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.388749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 
00:37:00.139 [2024-07-13 05:26:06.388956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.388990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.389152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.389185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.389347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.389380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.389503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.389535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.389731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.389767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.389985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.390018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.390232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.390268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.390535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.390592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.390796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.390828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.390993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.391025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 
00:37:00.139 [2024-07-13 05:26:06.391169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.391205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.391360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.391397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.391579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.391612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.391794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.391830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.392012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.392048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.392221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.392257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.392415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.392448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.392606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.392638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.392898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.392935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.393235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.393299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 
00:37:00.139 [2024-07-13 05:26:06.393478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.393510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.393645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.393678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.393834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.393881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.394088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.394124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.394280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.394317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.394459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.394510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.394667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.394703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.394909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.394945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.395130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.395163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.395323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.395359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 
00:37:00.139 [2024-07-13 05:26:06.395538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.395574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.395792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.395824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.395974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.396007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.139 [2024-07-13 05:26:06.396193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.139 [2024-07-13 05:26:06.396233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.139 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.396388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.396426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.396572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.396608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.396784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.396818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.396970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.397003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.397149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.397181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.397361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.397397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 
00:37:00.140 [2024-07-13 05:26:06.397600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.397632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.397787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.397823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.398016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.398050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.398289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.398345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.398531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.398564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.398708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.398741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.398886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.398919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.399081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.399114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.399268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.399301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.399482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.399520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 
00:37:00.140 [2024-07-13 05:26:06.399675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.399712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.399860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.399904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.400074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.400107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.400290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.400327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.400577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.400614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.400775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.400813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.401012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.401045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.401231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.401263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.401455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.401525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.401729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.401765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 
00:37:00.140 [2024-07-13 05:26:06.401924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.401957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.402095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.402128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.402289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.402322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.402566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.402625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.402804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.402841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.403075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.403112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.403316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.403351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.403579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.403638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.403824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.403857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 00:37:00.140 [2024-07-13 05:26:06.404067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.140 [2024-07-13 05:26:06.404103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.140 qpair failed and we were unable to recover it. 
00:37:00.140 [2024-07-13 05:26:06.404256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.140 [2024-07-13 05:26:06.404293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.140 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair-failure sequence repeats for every reconnect attempt from 05:26:06.404256 through 05:26:06.449538 (elapsed 00:37:00.140 to 00:37:00.147); every attempt targets tqpair=0x6150001f2780 at 10.0.0.2:4420 and fails with errno = 111 ...]
00:37:00.147 [2024-07-13 05:26:06.449764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.147 [2024-07-13 05:26:06.449796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.147 qpair failed and we were unable to recover it. 00:37:00.147 [2024-07-13 05:26:06.449951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.147 [2024-07-13 05:26:06.449988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.147 qpair failed and we were unable to recover it. 00:37:00.147 [2024-07-13 05:26:06.450188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.147 [2024-07-13 05:26:06.450243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.147 qpair failed and we were unable to recover it. 00:37:00.147 [2024-07-13 05:26:06.450427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.147 [2024-07-13 05:26:06.450459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.147 qpair failed and we were unable to recover it. 00:37:00.147 [2024-07-13 05:26:06.450641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.147 [2024-07-13 05:26:06.450673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.147 qpair failed and we were unable to recover it. 00:37:00.147 [2024-07-13 05:26:06.450849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.147 [2024-07-13 05:26:06.450891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.147 qpair failed and we were unable to recover it. 00:37:00.147 [2024-07-13 05:26:06.451060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.147 [2024-07-13 05:26:06.451096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.147 qpair failed and we were unable to recover it. 00:37:00.147 [2024-07-13 05:26:06.451266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.147 [2024-07-13 05:26:06.451321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.147 qpair failed and we were unable to recover it. 00:37:00.147 [2024-07-13 05:26:06.451504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.147 [2024-07-13 05:26:06.451536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.147 qpair failed and we were unable to recover it. 00:37:00.147 [2024-07-13 05:26:06.451744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.147 [2024-07-13 05:26:06.451781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.147 qpair failed and we were unable to recover it. 
00:37:00.147 [2024-07-13 05:26:06.451961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.147 [2024-07-13 05:26:06.451997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.147 qpair failed and we were unable to recover it. 00:37:00.147 [2024-07-13 05:26:06.452136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.147 [2024-07-13 05:26:06.452177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.147 qpair failed and we were unable to recover it. 00:37:00.147 [2024-07-13 05:26:06.452330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.147 [2024-07-13 05:26:06.452363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.147 qpair failed and we were unable to recover it. 00:37:00.147 [2024-07-13 05:26:06.452567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.147 [2024-07-13 05:26:06.452603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.147 qpair failed and we were unable to recover it. 00:37:00.147 [2024-07-13 05:26:06.452782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.147 [2024-07-13 05:26:06.452818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.147 qpair failed and we were unable to recover it. 00:37:00.147 [2024-07-13 05:26:06.452984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.147 [2024-07-13 05:26:06.453021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.147 qpair failed and we were unable to recover it. 00:37:00.147 [2024-07-13 05:26:06.453176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.147 [2024-07-13 05:26:06.453207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.147 qpair failed and we were unable to recover it. 00:37:00.147 [2024-07-13 05:26:06.453338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.147 [2024-07-13 05:26:06.453370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.147 qpair failed and we were unable to recover it. 00:37:00.147 [2024-07-13 05:26:06.453554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.147 [2024-07-13 05:26:06.453590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.147 qpair failed and we were unable to recover it. 00:37:00.147 [2024-07-13 05:26:06.453760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.147 [2024-07-13 05:26:06.453796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.147 qpair failed and we were unable to recover it. 
00:37:00.147 [2024-07-13 05:26:06.453956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.147 [2024-07-13 05:26:06.453989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.147 qpair failed and we were unable to recover it. 00:37:00.147 [2024-07-13 05:26:06.454128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.147 [2024-07-13 05:26:06.454160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.147 qpair failed and we were unable to recover it. 00:37:00.147 [2024-07-13 05:26:06.454322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.147 [2024-07-13 05:26:06.454354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.147 qpair failed and we were unable to recover it. 00:37:00.147 [2024-07-13 05:26:06.454527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.147 [2024-07-13 05:26:06.454563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.147 qpair failed and we were unable to recover it. 00:37:00.147 [2024-07-13 05:26:06.454768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.147 [2024-07-13 05:26:06.454800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.147 qpair failed and we were unable to recover it. 00:37:00.147 [2024-07-13 05:26:06.454990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.147 [2024-07-13 05:26:06.455026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.147 qpair failed and we were unable to recover it. 00:37:00.147 [2024-07-13 05:26:06.455201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.147 [2024-07-13 05:26:06.455236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.147 qpair failed and we were unable to recover it. 00:37:00.147 [2024-07-13 05:26:06.455414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.455466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.455672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.455704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.455926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.455963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 
00:37:00.148 [2024-07-13 05:26:06.456225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.456261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.456467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.456522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.456744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.456775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.456943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.456980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.457169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.457211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.457479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.457534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.457686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.457720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.457916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.457952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.458127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.458163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.458338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.458374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 
00:37:00.148 [2024-07-13 05:26:06.458560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.458593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.458731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.458764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.458938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.458970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.459210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.459265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.459453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.459484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.459637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.459673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.459857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.459895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.460067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.460099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.460291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.460322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.460447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.460498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 
00:37:00.148 [2024-07-13 05:26:06.460680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.460732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.460910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.460969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.461136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.461169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.461316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.461351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.461501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.461536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.461692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.461728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.461892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.461930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.462176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.462211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.462417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.462453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.462595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.462631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 
00:37:00.148 [2024-07-13 05:26:06.462777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.462809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.462952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.462985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.463145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.463194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.463414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.463469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.463646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.463678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.463818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.463877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.464050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.464082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.464294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.148 [2024-07-13 05:26:06.464330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.148 qpair failed and we were unable to recover it. 00:37:00.148 [2024-07-13 05:26:06.464513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.464546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.464716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.464751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 
00:37:00.149 [2024-07-13 05:26:06.464896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.464933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.465108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.465143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.465312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.465344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.465494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.465545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.465725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.465762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.465961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.465993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.466142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.466174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.466351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.466386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.466534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.466571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.466749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.466784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 
00:37:00.149 [2024-07-13 05:26:06.466980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.467013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.467257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.467293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.467465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.467500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.467648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.467685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.467874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.467907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.468061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.468097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.468266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.468301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.468514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.468569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.468751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.468783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.468968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.469005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 
00:37:00.149 [2024-07-13 05:26:06.469180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.469215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.469417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.469477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.469725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.469759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.469951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.469988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.470161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.470195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.470326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.470358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.470595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.470627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.470818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.470874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.471067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.471109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.471386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.471440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 
00:37:00.149 [2024-07-13 05:26:06.471620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.471652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.471790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.471822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.471986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.472019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.472227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.472260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.472389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.472421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.472615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.472652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.472830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.472884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.473046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.149 [2024-07-13 05:26:06.473081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.149 qpair failed and we were unable to recover it. 00:37:00.149 [2024-07-13 05:26:06.473241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.473275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 00:37:00.150 [2024-07-13 05:26:06.473440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.473472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 
00:37:00.150 [2024-07-13 05:26:06.473621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.473653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 00:37:00.150 [2024-07-13 05:26:06.473808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.473843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 00:37:00.150 [2024-07-13 05:26:06.474011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.474043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 00:37:00.150 [2024-07-13 05:26:06.474291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.474323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 00:37:00.150 [2024-07-13 05:26:06.474505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.474540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 00:37:00.150 [2024-07-13 05:26:06.474742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.474778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 00:37:00.150 [2024-07-13 05:26:06.474948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.474982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 00:37:00.150 [2024-07-13 05:26:06.475159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.475194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 00:37:00.150 [2024-07-13 05:26:06.475376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.475412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 00:37:00.150 [2024-07-13 05:26:06.475605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.475664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 
00:37:00.150 [2024-07-13 05:26:06.475846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.475885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 00:37:00.150 [2024-07-13 05:26:06.476040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.476072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 00:37:00.150 [2024-07-13 05:26:06.476204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.476236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 00:37:00.150 [2024-07-13 05:26:06.476395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.476428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 00:37:00.150 [2024-07-13 05:26:06.476557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.476589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 00:37:00.150 [2024-07-13 05:26:06.476717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.476766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 00:37:00.150 [2024-07-13 05:26:06.476977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.477013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 00:37:00.150 [2024-07-13 05:26:06.477211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.477243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 00:37:00.150 [2024-07-13 05:26:06.477393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.477425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 00:37:00.150 [2024-07-13 05:26:06.477552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.477584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 
00:37:00.150 [2024-07-13 05:26:06.477721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.477755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 00:37:00.150 [2024-07-13 05:26:06.477941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.477982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 00:37:00.150 [2024-07-13 05:26:06.478176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.478208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 00:37:00.150 [2024-07-13 05:26:06.478393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.478429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 00:37:00.150 [2024-07-13 05:26:06.478620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.478652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 00:37:00.150 [2024-07-13 05:26:06.478787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.478819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 00:37:00.150 [2024-07-13 05:26:06.478953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.478986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 00:37:00.150 [2024-07-13 05:26:06.479193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.479228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 00:37:00.150 [2024-07-13 05:26:06.479399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.479435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 00:37:00.150 [2024-07-13 05:26:06.479585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.150 [2024-07-13 05:26:06.479620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.150 qpair failed and we were unable to recover it. 
00:37:00.154 [2024-07-13 05:26:06.507599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.154 [2024-07-13 05:26:06.507649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.154 qpair failed and we were unable to recover it.
00:37:00.154 [2024-07-13 05:26:06.507821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.154 [2024-07-13 05:26:06.507857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.154 qpair failed and we were unable to recover it.
00:37:00.154 [2024-07-13 05:26:06.508038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.154 [2024-07-13 05:26:06.508073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.154 qpair failed and we were unable to recover it.
00:37:00.154 [2024-07-13 05:26:06.508240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.154 [2024-07-13 05:26:06.508275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.154 qpair failed and we were unable to recover it.
00:37:00.154 [2024-07-13 05:26:06.508444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.154 [2024-07-13 05:26:06.508478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.154 qpair failed and we were unable to recover it.
00:37:00.154 [2024-07-13 05:26:06.508638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.154 [2024-07-13 05:26:06.508672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.154 qpair failed and we were unable to recover it.
00:37:00.154 [2024-07-13 05:26:06.508809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.154 [2024-07-13 05:26:06.508849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.154 qpair failed and we were unable to recover it.
00:37:00.154 [2024-07-13 05:26:06.509020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.154 [2024-07-13 05:26:06.509053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.154 qpair failed and we were unable to recover it.
00:37:00.154 [2024-07-13 05:26:06.509244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.154 [2024-07-13 05:26:06.509278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.154 qpair failed and we were unable to recover it.
00:37:00.154 [2024-07-13 05:26:06.509440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.154 [2024-07-13 05:26:06.509472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.154 qpair failed and we were unable to recover it.
00:37:00.154 [2024-07-13 05:26:06.509635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.154 [2024-07-13 05:26:06.509669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.154 qpair failed and we were unable to recover it.
00:37:00.154 [2024-07-13 05:26:06.509806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.154 [2024-07-13 05:26:06.509842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.154 qpair failed and we were unable to recover it.
00:37:00.154 [2024-07-13 05:26:06.510012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.154 [2024-07-13 05:26:06.510044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.154 qpair failed and we were unable to recover it.
00:37:00.154 [2024-07-13 05:26:06.510204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.154 [2024-07-13 05:26:06.510237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.154 qpair failed and we were unable to recover it.
00:37:00.154 [2024-07-13 05:26:06.510376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.154 [2024-07-13 05:26:06.510408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.154 qpair failed and we were unable to recover it.
00:37:00.154 [2024-07-13 05:26:06.510475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor
00:37:00.154 [2024-07-13 05:26:06.510716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.154 [2024-07-13 05:26:06.510764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.154 qpair failed and we were unable to recover it.
00:37:00.154 [2024-07-13 05:26:06.510940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.154 [2024-07-13 05:26:06.510987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.154 qpair failed and we were unable to recover it.
00:37:00.154 [2024-07-13 05:26:06.511156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.154 [2024-07-13 05:26:06.511190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.154 qpair failed and we were unable to recover it.
00:37:00.154 [2024-07-13 05:26:06.511356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.154 [2024-07-13 05:26:06.511390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.154 qpair failed and we were unable to recover it.
00:37:00.156 [2024-07-13 05:26:06.519183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.519215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.519392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.519424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.519564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.519596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.519766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.519798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.519972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.520006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.520142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.520180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.520365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.520398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.520530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.520561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.520696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.520728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.520882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.520923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 
00:37:00.156 [2024-07-13 05:26:06.521121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.521154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.521287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.521320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.521450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.521482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.521642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.521674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.521840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.521879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.522063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.522096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.522226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.522258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.522414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.522446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.522602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.522634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.522780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.522812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 
00:37:00.156 [2024-07-13 05:26:06.522986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.523019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.523181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.523213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.523343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.523375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.523532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.523564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.523731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.523763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.523939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.523997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.524166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.524202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.524376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.524411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.524574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.524607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.524799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.524833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 
00:37:00.156 [2024-07-13 05:26:06.525009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.525043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.525209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.525242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.525438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.525472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.525633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.525668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.525859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.525902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.526054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.526086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.526226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.526259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.526386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.156 [2024-07-13 05:26:06.526418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.156 qpair failed and we were unable to recover it. 00:37:00.156 [2024-07-13 05:26:06.526591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.526623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.526782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.526814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 
00:37:00.157 [2024-07-13 05:26:06.527019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.527055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.527255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.527289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.527420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.527454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.527590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.527624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.527789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.527824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.527977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.528026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.528216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.528250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.528385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.528418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.528550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.528582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.528729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.528762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 
00:37:00.157 [2024-07-13 05:26:06.528908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.528954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.529089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.529121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.529251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.529283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.529442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.529474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.529638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.529671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.529836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.529879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.530070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.530104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.530267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.530300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.530443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.530478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.530631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.530665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 
00:37:00.157 [2024-07-13 05:26:06.530858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.530897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.531033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.531067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.531229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.531261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.531392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.531425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.531583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.531615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.531776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.531808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.531951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.531984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.532121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.532155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.532285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.532317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.532477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.532509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 
00:37:00.157 [2024-07-13 05:26:06.532678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.532714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.532882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.532916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.533165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.533199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.533362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.533396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.533560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.533593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.533760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.533794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.533961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.533995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.534136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.534168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.534322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.157 [2024-07-13 05:26:06.534355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.157 qpair failed and we were unable to recover it. 00:37:00.157 [2024-07-13 05:26:06.534490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.534522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 
00:37:00.158 [2024-07-13 05:26:06.534668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.534701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.534861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.534909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.535091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.535126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.535268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.535303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.535497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.535530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.535777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.535815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.535988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.536023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.536162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.536197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.536335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.536370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.536510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.536544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 
00:37:00.158 [2024-07-13 05:26:06.536786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.536819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.537011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.537045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.537185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.537220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.537386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.537419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.537612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.537646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.537808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.537840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.538009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.538041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.538174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.538206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.538390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.538423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.538590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.538624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 
00:37:00.158 [2024-07-13 05:26:06.538776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.538809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.538977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.539009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.539142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.539174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.539332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.539364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.539495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.539527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.539658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.539690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.539821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.539852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.540025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.540058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.540196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.540229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.540396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.540428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 
00:37:00.158 [2024-07-13 05:26:06.540587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.540619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.540748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.540780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.540952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.540985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.541119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.541152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.541316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.541349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.541506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.541538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.541695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.541728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.541864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.541904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.542068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.542100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 00:37:00.158 [2024-07-13 05:26:06.542231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.158 [2024-07-13 05:26:06.542263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.158 qpair failed and we were unable to recover it. 
00:37:00.159 [2024-07-13 05:26:06.542429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.159 [2024-07-13 05:26:06.542461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.159 qpair failed and we were unable to recover it. 00:37:00.159 [2024-07-13 05:26:06.542592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.159 [2024-07-13 05:26:06.542626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.159 qpair failed and we were unable to recover it. 00:37:00.159 [2024-07-13 05:26:06.542756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.159 [2024-07-13 05:26:06.542789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.159 qpair failed and we were unable to recover it. 00:37:00.159 [2024-07-13 05:26:06.542954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.159 [2024-07-13 05:26:06.542988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.159 qpair failed and we were unable to recover it. 00:37:00.159 [2024-07-13 05:26:06.543144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.159 [2024-07-13 05:26:06.543177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.159 qpair failed and we were unable to recover it. 00:37:00.159 [2024-07-13 05:26:06.543302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.159 [2024-07-13 05:26:06.543338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.159 qpair failed and we were unable to recover it. 00:37:00.159 [2024-07-13 05:26:06.543499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.159 [2024-07-13 05:26:06.543531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.159 qpair failed and we were unable to recover it. 00:37:00.159 [2024-07-13 05:26:06.543684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.159 [2024-07-13 05:26:06.543716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.159 qpair failed and we were unable to recover it. 00:37:00.159 [2024-07-13 05:26:06.543847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.159 [2024-07-13 05:26:06.543888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.159 qpair failed and we were unable to recover it. 00:37:00.159 [2024-07-13 05:26:06.544051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.159 [2024-07-13 05:26:06.544084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.159 qpair failed and we were unable to recover it. 
00:37:00.159 [2024-07-13 05:26:06.544245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.159 [2024-07-13 05:26:06.544277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.159 qpair failed and we were unable to recover it. 00:37:00.159 [2024-07-13 05:26:06.544416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.159 [2024-07-13 05:26:06.544448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.159 qpair failed and we were unable to recover it. 00:37:00.159 [2024-07-13 05:26:06.544602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.159 [2024-07-13 05:26:06.544634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.159 qpair failed and we were unable to recover it. 00:37:00.159 [2024-07-13 05:26:06.544774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.159 [2024-07-13 05:26:06.544807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.159 qpair failed and we were unable to recover it. 00:37:00.159 [2024-07-13 05:26:06.544978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.159 [2024-07-13 05:26:06.545011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.159 qpair failed and we were unable to recover it. 00:37:00.159 [2024-07-13 05:26:06.545138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.159 [2024-07-13 05:26:06.545171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.159 qpair failed and we were unable to recover it. 00:37:00.159 [2024-07-13 05:26:06.545329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.159 [2024-07-13 05:26:06.545361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.159 qpair failed and we were unable to recover it. 00:37:00.159 [2024-07-13 05:26:06.545515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.159 [2024-07-13 05:26:06.545547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.159 qpair failed and we were unable to recover it. 00:37:00.159 [2024-07-13 05:26:06.545685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.159 [2024-07-13 05:26:06.545727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.159 qpair failed and we were unable to recover it. 00:37:00.159 [2024-07-13 05:26:06.545889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.159 [2024-07-13 05:26:06.545922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.159 qpair failed and we were unable to recover it. 
[... 2024-07-13 05:26:06.546 - 05:26:06.570: the same three-part failure sequence ("connect() failed, errno = 111" from posix.c:1038:posix_sock_create, "sock connection error of tqpair=... with addr=10.0.0.2, port=4420" from nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock, "qpair failed and we were unable to recover it.") repeats continuously, mostly on tqpair=0x6150001f2780, with two short runs on tqpair=0x615000210000 ...]
00:37:00.162 [2024-07-13 05:26:06.570886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.162 [2024-07-13 05:26:06.570919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.162 qpair failed and we were unable to recover it. 00:37:00.162 [2024-07-13 05:26:06.571077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.162 [2024-07-13 05:26:06.571110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.162 qpair failed and we were unable to recover it. 00:37:00.162 [2024-07-13 05:26:06.571262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.162 [2024-07-13 05:26:06.571311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.162 qpair failed and we were unable to recover it. 00:37:00.162 [2024-07-13 05:26:06.571508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.162 [2024-07-13 05:26:06.571544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.162 qpair failed and we were unable to recover it. 00:37:00.162 [2024-07-13 05:26:06.571686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.162 [2024-07-13 05:26:06.571719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.162 qpair failed and we were unable to recover it. 00:37:00.162 [2024-07-13 05:26:06.571853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.162 [2024-07-13 05:26:06.571897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.163 qpair failed and we were unable to recover it. 00:37:00.163 [2024-07-13 05:26:06.572045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.163 [2024-07-13 05:26:06.572078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.163 qpair failed and we were unable to recover it. 00:37:00.163 [2024-07-13 05:26:06.572269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.163 [2024-07-13 05:26:06.572302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.163 qpair failed and we were unable to recover it. 00:37:00.163 [2024-07-13 05:26:06.572438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.163 [2024-07-13 05:26:06.572471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.163 qpair failed and we were unable to recover it. 00:37:00.163 [2024-07-13 05:26:06.572621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.163 [2024-07-13 05:26:06.572654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.163 qpair failed and we were unable to recover it. 
[... 2024-07-13 05:26:06.572 - 05:26:06.582: the same failure sequence continues, alternating among tqpair=0x6150001f2780, 0x6150001ffe80, and 0x615000210000, plus a single attempt on tqpair=0x61500021ff00, all with addr=10.0.0.2, port=4420 ...]
00:37:00.164 [2024-07-13 05:26:06.582337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.164 [2024-07-13 05:26:06.582371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.164 qpair failed and we were unable to recover it. 00:37:00.164 [2024-07-13 05:26:06.582555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.164 [2024-07-13 05:26:06.582588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.164 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.582749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.582782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.582918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.582952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.583093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.583126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.583313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.583346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.583503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.583535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.583671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.583705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.583874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.583908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.584095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.584142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 
00:37:00.165 [2024-07-13 05:26:06.584284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.584318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.584487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.584519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.584652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.584685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.584881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.584915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.585052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.585085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.585213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.585248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.585385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.585419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.585581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.585614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.585748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.585781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.585942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.585976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 
00:37:00.165 [2024-07-13 05:26:06.586146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.586179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.586339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.586372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.586557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.586605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.586778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.586813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.586964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.587000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.587139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.587172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.587394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.587428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.587611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.587645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.587810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.587844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.588018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.588052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 
00:37:00.165 [2024-07-13 05:26:06.588210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.588257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.588450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.588484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.588649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.588682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.588812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.588850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.589009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.165 [2024-07-13 05:26:06.589047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.165 qpair failed and we were unable to recover it. 00:37:00.165 [2024-07-13 05:26:06.589226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.589264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 00:37:00.166 [2024-07-13 05:26:06.589397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.589429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 00:37:00.166 [2024-07-13 05:26:06.589596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.589628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 00:37:00.166 [2024-07-13 05:26:06.589800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.589831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 00:37:00.166 [2024-07-13 05:26:06.590009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.590042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 
00:37:00.166 [2024-07-13 05:26:06.590200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.590232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 00:37:00.166 [2024-07-13 05:26:06.590389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.590421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 00:37:00.166 [2024-07-13 05:26:06.590560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.590593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 00:37:00.166 [2024-07-13 05:26:06.590754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.590786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 00:37:00.166 [2024-07-13 05:26:06.590933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.590971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 00:37:00.166 [2024-07-13 05:26:06.591127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.591160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 00:37:00.166 [2024-07-13 05:26:06.591293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.591325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 00:37:00.166 [2024-07-13 05:26:06.591525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.591568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 00:37:00.166 [2024-07-13 05:26:06.591750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.591783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 00:37:00.166 [2024-07-13 05:26:06.591943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.591977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 
00:37:00.166 [2024-07-13 05:26:06.592105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.592138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 00:37:00.166 [2024-07-13 05:26:06.592303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.592335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 00:37:00.166 [2024-07-13 05:26:06.592471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.592503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 00:37:00.166 [2024-07-13 05:26:06.592640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.592673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 00:37:00.166 [2024-07-13 05:26:06.592828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.592860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 00:37:00.166 [2024-07-13 05:26:06.593041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.593088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 00:37:00.166 [2024-07-13 05:26:06.593236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.593272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 00:37:00.166 [2024-07-13 05:26:06.593442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.593476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 00:37:00.166 [2024-07-13 05:26:06.593644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.593677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 00:37:00.166 [2024-07-13 05:26:06.593803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.593835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 
00:37:00.166 [2024-07-13 05:26:06.593976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.594010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 00:37:00.166 [2024-07-13 05:26:06.594168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.594201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 00:37:00.166 [2024-07-13 05:26:06.594389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.594427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 00:37:00.166 [2024-07-13 05:26:06.594570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.166 [2024-07-13 05:26:06.594604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.166 qpair failed and we were unable to recover it. 00:37:00.450 [2024-07-13 05:26:06.594755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.594791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 00:37:00.450 [2024-07-13 05:26:06.594983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.595018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 00:37:00.450 [2024-07-13 05:26:06.595182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.595216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 00:37:00.450 [2024-07-13 05:26:06.595344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.595377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 00:37:00.450 [2024-07-13 05:26:06.595537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.595569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 00:37:00.450 [2024-07-13 05:26:06.595723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.595771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 
00:37:00.450 [2024-07-13 05:26:06.595968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.596016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 00:37:00.450 [2024-07-13 05:26:06.596158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.596194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 00:37:00.450 [2024-07-13 05:26:06.596391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.596425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 00:37:00.450 [2024-07-13 05:26:06.596563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.596596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 00:37:00.450 [2024-07-13 05:26:06.596754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.596787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 00:37:00.450 [2024-07-13 05:26:06.596972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.597005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 00:37:00.450 [2024-07-13 05:26:06.597144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.597178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 00:37:00.450 [2024-07-13 05:26:06.597339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.597372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 00:37:00.450 [2024-07-13 05:26:06.597561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.597601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 00:37:00.450 [2024-07-13 05:26:06.597738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.597770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 
00:37:00.450 [2024-07-13 05:26:06.597927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.597961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 00:37:00.450 [2024-07-13 05:26:06.598129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.598163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 00:37:00.450 [2024-07-13 05:26:06.598346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.598379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 00:37:00.450 [2024-07-13 05:26:06.598507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.598540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 00:37:00.450 [2024-07-13 05:26:06.598680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.598712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 00:37:00.450 [2024-07-13 05:26:06.598890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.598939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 00:37:00.450 [2024-07-13 05:26:06.599152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.599188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 00:37:00.450 [2024-07-13 05:26:06.599353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.599388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 00:37:00.450 [2024-07-13 05:26:06.599549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.599583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 00:37:00.450 [2024-07-13 05:26:06.599752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.599786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 
00:37:00.450 [2024-07-13 05:26:06.599929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.599964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 00:37:00.450 [2024-07-13 05:26:06.600154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.600188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 00:37:00.450 [2024-07-13 05:26:06.600320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.600354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.450 qpair failed and we were unable to recover it. 00:37:00.450 [2024-07-13 05:26:06.600490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.450 [2024-07-13 05:26:06.600525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.600710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.600743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.600907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.600940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.601100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.601133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.601295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.601328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.601453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.601486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.601672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.601704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 
00:37:00.451 [2024-07-13 05:26:06.601864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.601904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.602065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.602098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.602258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.602295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.602457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.602490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.602628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.602661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.602820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.602852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.603037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.603085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.603259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.603296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.603461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.603495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.603641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.603675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 
00:37:00.451 [2024-07-13 05:26:06.603812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.603847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.604030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.604077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.604226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.604261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.604425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.604458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.604596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.604629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.604763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.604808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.604956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.604989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.605129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.605162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.605345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.605378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.605510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.605543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 
00:37:00.451 [2024-07-13 05:26:06.605737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.605772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.605931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.605966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.606108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.606143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.606291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.606324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.606526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.606560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.606724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.606758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.606902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.606937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.607123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.607171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.607342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.607377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.607525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.607558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 
00:37:00.451 [2024-07-13 05:26:06.607727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.607761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.607895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.607928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.608109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.608141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.608304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.451 [2024-07-13 05:26:06.608337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.451 qpair failed and we were unable to recover it. 00:37:00.451 [2024-07-13 05:26:06.608501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.452 [2024-07-13 05:26:06.608532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.452 qpair failed and we were unable to recover it. 00:37:00.452 [2024-07-13 05:26:06.608669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.452 [2024-07-13 05:26:06.608738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.452 qpair failed and we were unable to recover it. 00:37:00.452 [2024-07-13 05:26:06.608902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.452 [2024-07-13 05:26:06.608935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.452 qpair failed and we were unable to recover it. 00:37:00.452 [2024-07-13 05:26:06.609095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.452 [2024-07-13 05:26:06.609127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.452 qpair failed and we were unable to recover it. 00:37:00.452 [2024-07-13 05:26:06.609290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.452 [2024-07-13 05:26:06.609322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.452 qpair failed and we were unable to recover it. 00:37:00.452 [2024-07-13 05:26:06.609484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.452 [2024-07-13 05:26:06.609516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.452 qpair failed and we were unable to recover it. 
00:37:00.452 [2024-07-13 05:26:06.609653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.452 [2024-07-13 05:26:06.609685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.452 qpair failed and we were unable to recover it.
[... the same two-line error pair (posix.c:1038:posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error) repeats continuously from 05:26:06.609 through 05:26:06.651, cycling over tqpairs 0x6150001f2780, 0x6150001ffe80, 0x61500021ff00, and 0x615000210000; every attempt targets addr=10.0.0.2, port=4420 and every cycle ends "qpair failed and we were unable to recover it." ...]
00:37:00.457 [2024-07-13 05:26:06.651315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.457 [2024-07-13 05:26:06.651348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.457 qpair failed and we were unable to recover it. 00:37:00.457 [2024-07-13 05:26:06.651476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.457 [2024-07-13 05:26:06.651508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.457 qpair failed and we were unable to recover it. 00:37:00.457 [2024-07-13 05:26:06.651645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.457 [2024-07-13 05:26:06.651677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.457 qpair failed and we were unable to recover it. 00:37:00.457 [2024-07-13 05:26:06.651833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.457 [2024-07-13 05:26:06.651874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.457 qpair failed and we were unable to recover it. 00:37:00.457 [2024-07-13 05:26:06.652038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.457 [2024-07-13 05:26:06.652072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.457 qpair failed and we were unable to recover it. 00:37:00.457 [2024-07-13 05:26:06.652270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.457 [2024-07-13 05:26:06.652304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.457 qpair failed and we were unable to recover it. 00:37:00.457 [2024-07-13 05:26:06.652444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.457 [2024-07-13 05:26:06.652479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.457 qpair failed and we were unable to recover it. 00:37:00.457 [2024-07-13 05:26:06.652643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.457 [2024-07-13 05:26:06.652676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.457 qpair failed and we were unable to recover it. 00:37:00.457 [2024-07-13 05:26:06.652890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.457 [2024-07-13 05:26:06.652938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.457 qpair failed and we were unable to recover it. 00:37:00.457 [2024-07-13 05:26:06.653111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.457 [2024-07-13 05:26:06.653148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.457 qpair failed and we were unable to recover it. 
00:37:00.457 [2024-07-13 05:26:06.653311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.457 [2024-07-13 05:26:06.653345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.457 qpair failed and we were unable to recover it. 00:37:00.457 [2024-07-13 05:26:06.653475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.457 [2024-07-13 05:26:06.653508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.457 qpair failed and we were unable to recover it. 00:37:00.457 [2024-07-13 05:26:06.653668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.457 [2024-07-13 05:26:06.653701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.457 qpair failed and we were unable to recover it. 00:37:00.457 [2024-07-13 05:26:06.653892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.457 [2024-07-13 05:26:06.653940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.457 qpair failed and we were unable to recover it. 00:37:00.457 [2024-07-13 05:26:06.654120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.457 [2024-07-13 05:26:06.654155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.457 qpair failed and we were unable to recover it. 00:37:00.457 [2024-07-13 05:26:06.654291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.457 [2024-07-13 05:26:06.654325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.457 qpair failed and we were unable to recover it. 00:37:00.457 [2024-07-13 05:26:06.654487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.457 [2024-07-13 05:26:06.654520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.457 qpair failed and we were unable to recover it. 00:37:00.457 [2024-07-13 05:26:06.654656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.457 [2024-07-13 05:26:06.654688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.457 qpair failed and we were unable to recover it. 00:37:00.457 [2024-07-13 05:26:06.654854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.457 [2024-07-13 05:26:06.654897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.457 qpair failed and we were unable to recover it. 00:37:00.457 [2024-07-13 05:26:06.655055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.457 [2024-07-13 05:26:06.655102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.457 qpair failed and we were unable to recover it. 
00:37:00.457 [2024-07-13 05:26:06.655271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.457 [2024-07-13 05:26:06.655307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.457 qpair failed and we were unable to recover it. 00:37:00.457 [2024-07-13 05:26:06.655504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.457 [2024-07-13 05:26:06.655538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.655726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.655759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.655949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.655983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.656171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.656204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.656371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.656405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.656571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.656605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.656771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.656804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.656966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.657001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.657169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.657206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 
00:37:00.458 [2024-07-13 05:26:06.657371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.657405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.657536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.657570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.657736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.657774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.657935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.657969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.658127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.658160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.658293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.658326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.658466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.658500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.658656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.658688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.658817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.658850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.659007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.659040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 
00:37:00.458 [2024-07-13 05:26:06.659227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.659261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.659424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.659457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.659624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.659663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.659808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.659841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.660037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.660070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.660210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.660243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.660424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.660457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.660643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.660676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.660848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.660889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.661017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.661050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 
00:37:00.458 [2024-07-13 05:26:06.661206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.661239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.661404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.661437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.661618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.661651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.661779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.661812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.662004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.662038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.662240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.662288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.662435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.458 [2024-07-13 05:26:06.662483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.458 qpair failed and we were unable to recover it. 00:37:00.458 [2024-07-13 05:26:06.662614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.662647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.662782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.662814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.662983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.663031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 
00:37:00.459 [2024-07-13 05:26:06.663179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.663214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.663403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.663439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.663574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.663607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.663750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.663783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.663943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.663977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.664145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.664193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.664359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.664394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.664530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.664563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.664728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.664761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.664919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.664953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 
00:37:00.459 [2024-07-13 05:26:06.665089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.665122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.665314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.665347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.665489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.665527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.665688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.665721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.665882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.665915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.666056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.666088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.666251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.666284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.666447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.666479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.666639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.666671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.666805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.666838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 
00:37:00.459 [2024-07-13 05:26:06.666998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.667045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.667210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.667245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.667377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.667410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.667547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.667580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.667740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.667773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.667936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.667970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.668140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.668173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.668353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.668386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.668545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.668588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.668734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.668767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 
00:37:00.459 [2024-07-13 05:26:06.668944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.668978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.669115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.669148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.669293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.669326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.669454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.669487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.669642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.669676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.669839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.669877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.670016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.670049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.459 [2024-07-13 05:26:06.670227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.459 [2024-07-13 05:26:06.670275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.459 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.670465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.670500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.670690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.670738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 
00:37:00.460 [2024-07-13 05:26:06.670889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.670923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.671087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.671120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.671262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.671295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.671431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.671463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.671621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.671654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.671786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.671819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.672021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.672055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.672190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.672223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.672382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.672415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.672577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.672610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 
00:37:00.460 [2024-07-13 05:26:06.672738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.672770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.672934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.672968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.673098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.673136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.673309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.673343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.673502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.673535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.673695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.673728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.673859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.673898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.674036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.674070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.674251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.674284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.674472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.674505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 
00:37:00.460 [2024-07-13 05:26:06.674695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.674728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.674880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.674929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.675113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.675159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.675332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.675367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.675508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.675541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.675704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.675736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.675886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.675919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.676080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.676112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.676274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.676306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.676460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.676492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 
00:37:00.460 [2024-07-13 05:26:06.676654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.676686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.676839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.676881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.677055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.677103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.677287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.677334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.677482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.677518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.677685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.677719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.677888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.677922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.678087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.678120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.460 [2024-07-13 05:26:06.678247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.460 [2024-07-13 05:26:06.678280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.460 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.678453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.678502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 
00:37:00.461 [2024-07-13 05:26:06.678675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.678710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.678913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.678961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.679107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.679142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.679309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.679342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.679503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.679536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.679674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.679710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.679903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.679938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.680097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.680130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.680260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.680293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.680453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.680486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 
00:37:00.461 [2024-07-13 05:26:06.680646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.680680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.680828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.680883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.681037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.681080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.681248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.681296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.681444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.681481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.681621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.681656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.681812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.681846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.681991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.682025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.682188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.682221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.682352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.682386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 
00:37:00.461 [2024-07-13 05:26:06.682567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.682614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.682764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.682800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.682953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.683000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.683150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.683184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.683325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.683359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.683526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.683560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.683756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.683791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.684017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.684068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.684228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.684275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.684417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.684452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 
00:37:00.461 [2024-07-13 05:26:06.684641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.684675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.684801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.684834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.685018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.685052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.685193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.685227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.685366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.685400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.685561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.685594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.685749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.685797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.686008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.686045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.686225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.686279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.461 qpair failed and we were unable to recover it. 00:37:00.461 [2024-07-13 05:26:06.686438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.461 [2024-07-13 05:26:06.686473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 
00:37:00.462 [2024-07-13 05:26:06.686617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.686650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.686809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.686841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.687010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.687058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.687200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.687236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.687399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.687432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.687594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.687627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.687767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.687801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.687970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.688004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.688145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.688178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.688313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.688346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 
00:37:00.462 [2024-07-13 05:26:06.688505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.688538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.688669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.688701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.688838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.688884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.689057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.689090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.689227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.689270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.689429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.689462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.689592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.689624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.689808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.689841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.689974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.690007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.690141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.690173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 
00:37:00.462 [2024-07-13 05:26:06.690360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.690393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.690530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.690562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.690702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.690735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.690892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.690926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.691064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.691098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.691272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.691305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.691480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.691513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.691669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.691701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.691843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.691904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.692072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.692119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 
00:37:00.462 [2024-07-13 05:26:06.692296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.692331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.692518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.692551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.692691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.692724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.692857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.692903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.693045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.693080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.693287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.693335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.693500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.693537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.693706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.693745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.693911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.693946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 00:37:00.462 [2024-07-13 05:26:06.694104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.462 [2024-07-13 05:26:06.694139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.462 qpair failed and we were unable to recover it. 
00:37:00.462 [2024-07-13 05:26:06.694278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.694312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.694489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.694525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.694695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.694729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.694872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.694907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.695047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.695080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.695248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.695282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.695444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.695478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.695620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.695655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.695814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.695847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.696016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.696063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 
00:37:00.463 [2024-07-13 05:26:06.696221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.696255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.696409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.696442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.696607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.696639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.696791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.696827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.696995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.697029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.697219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.697253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.697413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.697447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.697625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.697659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.697845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.697885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.698049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.698096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 
00:37:00.463 [2024-07-13 05:26:06.698241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.698276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.698414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.698447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.698577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.698610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.698743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.698775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.698918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.698965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.699113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.699149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.699317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.699351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.699511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.699545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.699677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.699710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.699902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.699937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 
00:37:00.463 [2024-07-13 05:26:06.700073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.700107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.700291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.700323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.700465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.700497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.700669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.700702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.700876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.463 [2024-07-13 05:26:06.700913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.463 qpair failed and we were unable to recover it. 00:37:00.463 [2024-07-13 05:26:06.701072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.701104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.701273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.701306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.701442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.701475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.701612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.701656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.701792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.701829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 
00:37:00.464 [2024-07-13 05:26:06.702005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.702038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.702174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.702207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.702393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.702425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.702584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.702616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.702772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.702804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.703016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.703065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.703202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.703237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.703420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.703455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.703634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.703668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.703843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.703904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 
00:37:00.464 [2024-07-13 05:26:06.704076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.704111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.704279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.704312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.704466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.704500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.704646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.704679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.704817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.704851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.705029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.705062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.705190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.705223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.705357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.705390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.705551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.705584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.705745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.705777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 
00:37:00.464 [2024-07-13 05:26:06.705936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.705970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.706107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.706151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.706288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.706322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.706479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.706512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.706652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.706686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.706857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.706898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.707060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.707094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.707229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.707262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.707424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.707456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.707639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.707678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 
00:37:00.464 [2024-07-13 05:26:06.707812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.707844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.708037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.708070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.708229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.708263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.708403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.464 [2024-07-13 05:26:06.708436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.464 qpair failed and we were unable to recover it. 00:37:00.464 [2024-07-13 05:26:06.708582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.465 [2024-07-13 05:26:06.708615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.465 qpair failed and we were unable to recover it. 00:37:00.465 [2024-07-13 05:26:06.708741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.465 [2024-07-13 05:26:06.708774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.465 qpair failed and we were unable to recover it. 00:37:00.465 [2024-07-13 05:26:06.708935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.465 [2024-07-13 05:26:06.708968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.465 qpair failed and we were unable to recover it. 00:37:00.465 [2024-07-13 05:26:06.709099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.465 [2024-07-13 05:26:06.709132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.465 qpair failed and we were unable to recover it. 00:37:00.465 [2024-07-13 05:26:06.709288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.465 [2024-07-13 05:26:06.709321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.465 qpair failed and we were unable to recover it. 00:37:00.465 [2024-07-13 05:26:06.709480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.465 [2024-07-13 05:26:06.709518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.465 qpair failed and we were unable to recover it. 
00:37:00.465 [2024-07-13 05:26:06.709657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.465 [2024-07-13 05:26:06.709690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.465 qpair failed and we were unable to recover it.
00:37:00.465 [2024-07-13 05:26:06.710084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.465 [2024-07-13 05:26:06.710131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.465 qpair failed and we were unable to recover it.
00:37:00.465 [...] the same three-record failure repeats back to back, alternating between tqpair=0x6150001ffe80 and tqpair=0x61500021ff00 (always addr=10.0.0.2, port=4420, errno = 111), roughly 200 more times between 05:26:06.710 and 05:26:06.750; the duplicate records are omitted here.
00:37:00.470 [2024-07-13 05:26:06.750279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.470 [2024-07-13 05:26:06.750321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.470 qpair failed and we were unable to recover it.
00:37:00.470 [2024-07-13 05:26:06.750492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.470 [2024-07-13 05:26:06.750526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.470 qpair failed and we were unable to recover it. 00:37:00.470 [2024-07-13 05:26:06.750688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.470 [2024-07-13 05:26:06.750721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.470 qpair failed and we were unable to recover it. 00:37:00.470 [2024-07-13 05:26:06.750885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.470 [2024-07-13 05:26:06.750919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.470 qpair failed and we were unable to recover it. 00:37:00.470 [2024-07-13 05:26:06.751068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.470 [2024-07-13 05:26:06.751101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.470 qpair failed and we were unable to recover it. 00:37:00.470 [2024-07-13 05:26:06.751242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.470 [2024-07-13 05:26:06.751275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.470 qpair failed and we were unable to recover it. 00:37:00.470 [2024-07-13 05:26:06.751401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.470 [2024-07-13 05:26:06.751434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.470 qpair failed and we were unable to recover it. 00:37:00.470 [2024-07-13 05:26:06.751631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.470 [2024-07-13 05:26:06.751664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.470 qpair failed and we were unable to recover it. 00:37:00.470 [2024-07-13 05:26:06.751789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.470 [2024-07-13 05:26:06.751822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.470 qpair failed and we were unable to recover it. 00:37:00.470 [2024-07-13 05:26:06.751983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.470 [2024-07-13 05:26:06.752016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.470 qpair failed and we were unable to recover it. 00:37:00.470 [2024-07-13 05:26:06.752177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.470 [2024-07-13 05:26:06.752211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.470 qpair failed and we were unable to recover it. 
00:37:00.470 [2024-07-13 05:26:06.752375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.470 [2024-07-13 05:26:06.752409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.470 qpair failed and we were unable to recover it. 00:37:00.470 [2024-07-13 05:26:06.752548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.470 [2024-07-13 05:26:06.752581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.470 qpair failed and we were unable to recover it. 00:37:00.470 [2024-07-13 05:26:06.752741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.470 [2024-07-13 05:26:06.752774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.470 qpair failed and we were unable to recover it. 00:37:00.470 [2024-07-13 05:26:06.752935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.470 [2024-07-13 05:26:06.752969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.470 qpair failed and we were unable to recover it. 00:37:00.470 [2024-07-13 05:26:06.753137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.470 [2024-07-13 05:26:06.753171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.470 qpair failed and we were unable to recover it. 00:37:00.470 [2024-07-13 05:26:06.753326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.470 [2024-07-13 05:26:06.753360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.470 qpair failed and we were unable to recover it. 00:37:00.470 [2024-07-13 05:26:06.753526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.470 [2024-07-13 05:26:06.753559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.470 qpair failed and we were unable to recover it. 00:37:00.470 [2024-07-13 05:26:06.753699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.470 [2024-07-13 05:26:06.753733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.470 qpair failed and we were unable to recover it. 00:37:00.470 [2024-07-13 05:26:06.753887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.470 [2024-07-13 05:26:06.753922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.470 qpair failed and we were unable to recover it. 00:37:00.470 [2024-07-13 05:26:06.754080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.470 [2024-07-13 05:26:06.754113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.470 qpair failed and we were unable to recover it. 
00:37:00.470 [2024-07-13 05:26:06.754276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.470 [2024-07-13 05:26:06.754309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.470 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.754457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.754489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.754670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.754703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.754857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.754901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.755057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.755090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.755265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.755298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.755492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.755525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.755671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.755704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.755885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.755934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.756106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.756147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 
00:37:00.471 [2024-07-13 05:26:06.756329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.756364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.756541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.756576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.756766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.756804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.756952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.756986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.757150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.757184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.757375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.757411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.757569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.757603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.757743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.757777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.757926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.757960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.758135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.758169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 
00:37:00.471 [2024-07-13 05:26:06.758341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.758377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.758512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.758545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.758697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.758730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.758882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.758924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.759056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.759089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.759243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.759276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.759441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.759474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.759605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.759638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.759800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.759833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.759979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.760014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 
00:37:00.471 [2024-07-13 05:26:06.760194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.760228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.760427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.760471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.760603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.760637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.760777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.760811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.760961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.760996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.761135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.471 [2024-07-13 05:26:06.761190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.471 qpair failed and we were unable to recover it. 00:37:00.471 [2024-07-13 05:26:06.761366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.761399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.761578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.761612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.761794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.761829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.761974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.762008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 
00:37:00.472 [2024-07-13 05:26:06.762199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.762233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.762365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.762397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.762549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.762582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.762722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.762763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.762907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.762943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.763109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.763143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.763307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.763340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.763536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.763569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.763725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.763760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.763957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.763996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 
00:37:00.472 [2024-07-13 05:26:06.764130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.764163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.764381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.764414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.764558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.764591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.764725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.764759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.764926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.764960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.765122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.765155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.765322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.765356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.765513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.765546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.765728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.765762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.765915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.765949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 
00:37:00.472 [2024-07-13 05:26:06.766111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.766144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.766333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.766367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.766555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.766589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.766769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.766810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.766960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.766995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.767157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.767190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.767350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.767383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.767518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.767551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.767721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.767754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 00:37:00.472 [2024-07-13 05:26:06.767891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.472 [2024-07-13 05:26:06.767926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.472 qpair failed and we were unable to recover it. 
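errno 111 is ECONNREFUSED on Linux: nothing was accepting TCP connections on 10.0.0.2:4420 while the target was down, so every reconnect attempt failed immediately. The same failure can be reproduced by hand with bash's built-in TCP redirection (a sketch, assuming the address is reachable but nothing listens on the port; the exact error wording varies by bash version):

    $ bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'
    bash: connect: Connection refused            # errno 111 (ECONNREFUSED)
    $ echo $?
    1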
00:37:00.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 869096 Killed "${NVMF_APP[@]}" "$@"
00:37:00.473 05:26:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:37:00.473 05:26:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:37:00.473 05:26:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:37:00.473 05:26:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:37:00.473 05:26:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
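For reference, -m 0xF0 in the nvmfappstart trace above is the standard SPDK/DPDK CPU core mask; 0xF0 has bits 4-7 set, so the restarted target is pinned to cores 4-7. The mask arithmetic can be checked with a quick one-liner (illustrative only):

    $ printf '%d\n' 0xF0                                      # 240 = 0b11110000
    240
    $ python3 -c 'print([i for i in range(8) if 0xF0 >> i & 1])'
    [4, 5, 6, 7]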
[... connect() failed, errno = 111 retries against both tqpairs continue in the background ...]
00:37:00.473 05:26:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=869704
00:37:00.473 05:26:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:37:00.473 05:26:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 869704
00:37:00.473 05:26:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 869704 ']'
00:37:00.473 05:26:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:00.473 05:26:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:37:00.473 05:26:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:00.473 05:26:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:37:00.473 05:26:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
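The trace above restarts the nvmf target inside the cvl_0_0_ns_spdk network namespace and then waits for its RPC socket. Condensed into a standalone sketch built from the commands visible in the trace (the polling loop is an assumption standing in for SPDK's actual waitforlisten implementation):

    # restart the target in the test namespace (nvmf/common.sh@480 above)
    sudo ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!                                  # 869704 in this run
    # poll for the UNIX-domain RPC socket (stand-in for: waitforlisten 869704)
    max_retries=100
    until [ -S /var/tmp/spdk.sock ]; do
        (( max_retries-- > 0 )) || exit 1       # give up after ~10 s
        sleep 0.1
    done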
[... connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it. retries against tqpair=0x6150001ffe80 and tqpair=0x61500021ff00 continue past this point ...]
00:37:00.474 [2024-07-13 05:26:06.781721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.474 [2024-07-13 05:26:06.781754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.474 qpair failed and we were unable to recover it. 00:37:00.474 [2024-07-13 05:26:06.781898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.474 [2024-07-13 05:26:06.781942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.474 qpair failed and we were unable to recover it. 00:37:00.474 [2024-07-13 05:26:06.782081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.474 [2024-07-13 05:26:06.782113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.474 qpair failed and we were unable to recover it. 00:37:00.474 [2024-07-13 05:26:06.782288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.474 [2024-07-13 05:26:06.782322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.474 qpair failed and we were unable to recover it. 00:37:00.474 [2024-07-13 05:26:06.782449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.474 [2024-07-13 05:26:06.782481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.474 qpair failed and we were unable to recover it. 00:37:00.474 [2024-07-13 05:26:06.782633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.474 [2024-07-13 05:26:06.782678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.474 qpair failed and we were unable to recover it. 00:37:00.474 [2024-07-13 05:26:06.782845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.474 [2024-07-13 05:26:06.782890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.474 qpair failed and we were unable to recover it. 00:37:00.474 [2024-07-13 05:26:06.783029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.474 [2024-07-13 05:26:06.783061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.474 qpair failed and we were unable to recover it. 00:37:00.474 [2024-07-13 05:26:06.783224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.474 [2024-07-13 05:26:06.783257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.783410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.783444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 
00:37:00.475 [2024-07-13 05:26:06.783610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.783644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.783809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.783843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.784000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.784033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.784175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.784209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.784336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.784380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.784545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.784577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.784755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.784788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.784962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.784997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.785138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.785172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.785352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.785385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 
00:37:00.475 [2024-07-13 05:26:06.785551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.785585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.785747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.785779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.785923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.785957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.786089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.786123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.786264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.786298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.786458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.786491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.786621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.786655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.786793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.786826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.786992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.787026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.787177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.787211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 
00:37:00.475 [2024-07-13 05:26:06.787374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.787408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.787555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.787588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.787750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.787783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.787922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.787956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.788099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.788132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.788274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.788308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.788458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.788491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.788633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.788667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.788807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.788840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.788996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.789029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 
00:37:00.475 [2024-07-13 05:26:06.789167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.789200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.789326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.789359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.789546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.789594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.789748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.789787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.789943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.789981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.790140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.790173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.475 qpair failed and we were unable to recover it. 00:37:00.475 [2024-07-13 05:26:06.790334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.475 [2024-07-13 05:26:06.790372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.790553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.790586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.790754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.790787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.790951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.790985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 
00:37:00.476 [2024-07-13 05:26:06.791124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.791157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.791294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.791327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.791487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.791519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.791688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.791720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.791863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.791903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.792040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.792073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.792205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.792239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.792401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.792434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.792564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.792597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.792770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.792804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 
00:37:00.476 [2024-07-13 05:26:06.792967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.793000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.793145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.793178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.793301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.793334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.793484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.793517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.793650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.793683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.793815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.793848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.793981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.794015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.794142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.794176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.794304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.794337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.794527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.794560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 
00:37:00.476 [2024-07-13 05:26:06.794699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.794733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.794882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.794916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.795045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.795078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.795243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.795276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.795416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.795449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.795596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.795630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.795789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.795822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.795964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.795997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.796156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.796189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.796353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.796386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 
00:37:00.476 [2024-07-13 05:26:06.796577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.796611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.796770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.796835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.796987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.797021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.797153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.797186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.476 [2024-07-13 05:26:06.797347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.476 [2024-07-13 05:26:06.797380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.476 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.797516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.797550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.797681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.797718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.797879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.797912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.798079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.798112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.798276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.798309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 
00:37:00.477 [2024-07-13 05:26:06.798439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.798473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.798609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.798642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.798809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.798842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.798999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.799033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.799166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.799198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.799356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.799389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.799547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.799580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.799709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.799742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.799883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.799917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.800049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.800082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 
00:37:00.477 [2024-07-13 05:26:06.800245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.800278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.800443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.800476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.800639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.800672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.800809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.800843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.800980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.801013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.801145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.801177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.801308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.801341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.801502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.801535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.801673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.801705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.801834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.801874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 
00:37:00.477 [2024-07-13 05:26:06.802012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.802045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.802180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.802215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.802405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.802438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.802585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.802619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.802758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.802791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.802928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.802962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.803099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.803132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.803270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.803303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.803465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.803498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.803642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.803676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 
00:37:00.477 [2024-07-13 05:26:06.803838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.803881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.804024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.804058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.804187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.804220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.804393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.804426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.804584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.477 [2024-07-13 05:26:06.804617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.477 qpair failed and we were unable to recover it. 00:37:00.477 [2024-07-13 05:26:06.804777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.478 [2024-07-13 05:26:06.804810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.478 qpair failed and we were unable to recover it. 00:37:00.478 [2024-07-13 05:26:06.804975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.478 [2024-07-13 05:26:06.805013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.478 qpair failed and we were unable to recover it. 00:37:00.478 [2024-07-13 05:26:06.805144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.478 [2024-07-13 05:26:06.805177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.478 qpair failed and we were unable to recover it. 00:37:00.478 [2024-07-13 05:26:06.805334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.478 [2024-07-13 05:26:06.805367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.478 qpair failed and we were unable to recover it. 00:37:00.478 [2024-07-13 05:26:06.805494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.478 [2024-07-13 05:26:06.805528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.478 qpair failed and we were unable to recover it. 
00:37:00.478 [2024-07-13 05:26:06.805665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.478 [2024-07-13 05:26:06.805697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.478 qpair failed and we were unable to recover it. 00:37:00.478 [2024-07-13 05:26:06.805836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.478 [2024-07-13 05:26:06.805880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.478 qpair failed and we were unable to recover it. 00:37:00.478 [2024-07-13 05:26:06.806015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.478 [2024-07-13 05:26:06.806048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.478 qpair failed and we were unable to recover it. 00:37:00.478 [2024-07-13 05:26:06.806215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.478 [2024-07-13 05:26:06.806248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.478 qpair failed and we were unable to recover it. 00:37:00.478 [2024-07-13 05:26:06.806414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.478 [2024-07-13 05:26:06.806448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.478 qpair failed and we were unable to recover it. 00:37:00.478 [2024-07-13 05:26:06.806594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.478 [2024-07-13 05:26:06.806627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.478 qpair failed and we were unable to recover it. 00:37:00.478 [2024-07-13 05:26:06.806764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.478 [2024-07-13 05:26:06.806797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.478 qpair failed and we were unable to recover it. 00:37:00.478 [2024-07-13 05:26:06.806936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.478 [2024-07-13 05:26:06.806970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.478 qpair failed and we were unable to recover it. 00:37:00.478 [2024-07-13 05:26:06.807107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.478 [2024-07-13 05:26:06.807140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.478 qpair failed and we were unable to recover it. 00:37:00.478 [2024-07-13 05:26:06.807274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.478 [2024-07-13 05:26:06.807308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.478 qpair failed and we were unable to recover it. 
00:37:00.478 [2024-07-13 05:26:06.807455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.478 [2024-07-13 05:26:06.807489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.478 qpair failed and we were unable to recover it. 00:37:00.478 [2024-07-13 05:26:06.807621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.478 [2024-07-13 05:26:06.807653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.478 qpair failed and we were unable to recover it. 00:37:00.478 [2024-07-13 05:26:06.807813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.478 [2024-07-13 05:26:06.807847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.478 qpair failed and we were unable to recover it. 00:37:00.478 [2024-07-13 05:26:06.807998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.478 [2024-07-13 05:26:06.808032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.478 qpair failed and we were unable to recover it. 00:37:00.478 [2024-07-13 05:26:06.808199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.478 [2024-07-13 05:26:06.808234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.478 qpair failed and we were unable to recover it. 00:37:00.478 [2024-07-13 05:26:06.808372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.478 [2024-07-13 05:26:06.808416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.478 qpair failed and we were unable to recover it. 00:37:00.478 [2024-07-13 05:26:06.808552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.478 [2024-07-13 05:26:06.808586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.478 qpair failed and we were unable to recover it. 00:37:00.478 [2024-07-13 05:26:06.808719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.478 [2024-07-13 05:26:06.808752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.478 qpair failed and we were unable to recover it. 00:37:00.478 [2024-07-13 05:26:06.808908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.478 [2024-07-13 05:26:06.808941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.478 qpair failed and we were unable to recover it. 00:37:00.478 [2024-07-13 05:26:06.809069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.478 [2024-07-13 05:26:06.809103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.478 qpair failed and we were unable to recover it. 
00:37:00.478 [2024-07-13 05:26:06.809233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.478 [2024-07-13 05:26:06.809266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.478 qpair failed and we were unable to recover it.
00:37:00.478 [2024-07-13 05:26:06.810030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.478 [2024-07-13 05:26:06.810067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.478 qpair failed and we were unable to recover it.
00:37:00.478 [2024-07-13 05:26:06.810222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.478 [2024-07-13 05:26:06.810256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.478 qpair failed and we were unable to recover it.
00:37:00.478 [2024-07-13 05:26:06.810469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.478 [2024-07-13 05:26:06.810503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.478 qpair failed and we were unable to recover it.
00:37:00.478 [2024-07-13 05:26:06.810667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.478 [2024-07-13 05:26:06.810700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.478 qpair failed and we were unable to recover it.
00:37:00.478 [2024-07-13 05:26:06.810833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.478 [2024-07-13 05:26:06.810875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.478 qpair failed and we were unable to recover it.
00:37:00.478 [2024-07-13 05:26:06.811015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.478 [2024-07-13 05:26:06.811048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.478 qpair failed and we were unable to recover it.
00:37:00.478 [2024-07-13 05:26:06.811239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.478 [2024-07-13 05:26:06.811287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.478 qpair failed and we were unable to recover it.
00:37:00.478 [2024-07-13 05:26:06.811434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.478 [2024-07-13 05:26:06.811470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.478 qpair failed and we were unable to recover it.
00:37:00.478 [2024-07-13 05:26:06.811632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.478 [2024-07-13 05:26:06.811666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.478 qpair failed and we were unable to recover it.
00:37:00.478 [2024-07-13 05:26:06.811827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.478 [2024-07-13 05:26:06.811860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.478 qpair failed and we were unable to recover it.
00:37:00.478 [2024-07-13 05:26:06.812015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.478 [2024-07-13 05:26:06.812049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.478 qpair failed and we were unable to recover it.
00:37:00.478 [2024-07-13 05:26:06.812188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.478 [2024-07-13 05:26:06.812221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.478 qpair failed and we were unable to recover it.
00:37:00.478 [2024-07-13 05:26:06.812398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.478 [2024-07-13 05:26:06.812431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.478 qpair failed and we were unable to recover it.
00:37:00.478 [2024-07-13 05:26:06.812599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.478 [2024-07-13 05:26:06.812633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.478 qpair failed and we were unable to recover it.
00:37:00.478 [2024-07-13 05:26:06.812815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.478 [2024-07-13 05:26:06.812849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.478 qpair failed and we were unable to recover it.
00:37:00.478 [2024-07-13 05:26:06.813007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.813047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.813192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.813226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.813389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.813423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.813557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.813591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.813777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.813827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.813995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.814030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.814173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.814207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.814370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.814404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.814548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.814581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.814729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.814776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.814946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.814982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.815145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.815187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.815319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.815352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.815489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.815522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.815681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.815715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.815863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.815903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.816046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.816080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.816239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.816272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.816408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.816442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.816611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.816643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.816777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.816811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.816987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.817021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.817162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.817195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.817340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.817372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.817540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.817573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.817717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.817764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.817938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.817986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.818131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.818171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.818331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.818365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.818561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.818595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.818749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.818784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.818948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.818983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.819124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.819163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.819367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.819400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.819527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.819560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.819697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.819732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.819892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.819940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.820106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.820152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.820309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.820344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.820532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.820565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.820710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.479 [2024-07-13 05:26:06.820750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.479 qpair failed and we were unable to recover it.
00:37:00.479 [2024-07-13 05:26:06.821575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.821610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.821832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.821884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.822021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.822055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.822197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.822231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.822426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.822460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.822590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.822623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.822762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.822795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.822947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.822980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.823115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.823148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.823292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.823324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.823488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.823521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.823678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.823710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.823843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.823888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.824043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.824076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.824225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.824258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.824421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.824453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.824580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.824613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.825393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.825435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.825634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.825667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.826404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.826447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.826675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.826709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.826851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.826898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.827031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.827064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.827216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.827248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.827408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.827441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.827576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.827609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.827753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.827786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.827928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.827962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.828102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.828135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.828303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.828345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.828484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.828517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.828650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.828683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.828863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.828923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.829081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.829118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.829293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.829327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.480 qpair failed and we were unable to recover it.
00:37:00.480 [2024-07-13 05:26:06.829503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.480 [2024-07-13 05:26:06.829536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.829721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.829756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.829906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.829941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.830103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.830136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.830315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.830354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.830493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.830527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.830666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.830700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.830846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.830895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.831047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.831080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.831245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.831277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.831416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.831449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.831605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.831637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.831784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.831819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.831979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.832025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.832231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.832278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.832468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.832512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.832655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.832689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.832828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.832862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.833026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.833059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.833188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.833221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.833357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.833389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.833548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.833581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.833745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.833778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.833917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.833950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.834104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.834137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.834285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.834317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.834492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.834524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.834682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.834716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.834853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.834900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.835030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.835063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.835193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.835225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.835385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.835418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.835582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.835615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.835782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.835815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.835968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.836001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.836136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.836176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.836319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.836351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.836543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.836575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.836732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.836766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.836928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.836961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.837094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-13 05:26:06.837127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.481 qpair failed and we were unable to recover it.
00:37:00.481 [2024-07-13 05:26:06.837305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.837338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.837478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.837511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.837687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.837719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.838530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.838567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.838819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.838852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.839009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.839042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.839182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.839214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.839364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.839412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.839614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.839647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.839799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.839832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.840012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.840062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.840216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.840254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.840433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.840467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.840603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.840638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.840818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.840851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.841020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.841055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.841240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.841273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.841446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.841479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.841649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.841682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.841832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.841904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.842095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.842143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.843194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.843247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.843476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.843512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.843682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.843716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.843891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.843926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.844064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.844097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.844271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.844305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.844499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.844532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.844692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.844725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.844884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.844919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.845060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.845093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.845286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.845319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.845514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.845547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.845707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.845746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.845940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.845989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.846163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.846211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.846413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-13 05:26:06.846473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.482 qpair failed and we were unable to recover it.
00:37:00.482 [2024-07-13 05:26:06.846632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.482 [2024-07-13 05:26:06.846667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.482 qpair failed and we were unable to recover it. 00:37:00.482 [2024-07-13 05:26:06.846800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.482 [2024-07-13 05:26:06.846833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.482 qpair failed and we were unable to recover it. 00:37:00.482 [2024-07-13 05:26:06.846988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.482 [2024-07-13 05:26:06.847022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.482 qpair failed and we were unable to recover it. 00:37:00.482 [2024-07-13 05:26:06.847159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.482 [2024-07-13 05:26:06.847193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.482 qpair failed and we were unable to recover it. 00:37:00.482 [2024-07-13 05:26:06.847347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.847380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.847547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.847579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.847711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.847750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.847938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.847987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.848170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.848205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.848363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.848397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 
00:37:00.483 [2024-07-13 05:26:06.848591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.848631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.848765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.848799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.848947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.848981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.849124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.849157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.849324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.849357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.849488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.849520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.849679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.849712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.849859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.849896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.850056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.850088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.850257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.850289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 
00:37:00.483 [2024-07-13 05:26:06.850424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.850456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.850632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.850681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.850860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.850915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.851059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.851094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.851239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.851272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.851409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.851442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.851603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.851636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.851778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.851812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.852001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.852049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.852283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.852318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 
00:37:00.483 [2024-07-13 05:26:06.852510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.852544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.852709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.852743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.852914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.852949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.853122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.853170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.853337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.853372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.853505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.853539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.853704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.853737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.853932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.853979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.854177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.854212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.854379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.854412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 
00:37:00.483 [2024-07-13 05:26:06.854575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.854608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.854741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.854774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.854915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.854949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.855092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.855126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.483 [2024-07-13 05:26:06.855307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-13 05:26:06.855345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.483 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.855506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.855539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.855674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.855713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.855844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.855887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.856039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.856073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.856206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.856239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 
00:37:00.484 [2024-07-13 05:26:06.856398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.856431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.856596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.856629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.856768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.856802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.856958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.856993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.857130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.857171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.857329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.857362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.857519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.857553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.857708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.857741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.857903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.857937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.858069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.858102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 
00:37:00.484 [2024-07-13 05:26:06.858288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.858322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.858482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.858516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.858651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.858683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.858810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.858843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.859015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.859063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.859206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.859241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.859405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.859437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.859575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.859621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.859784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.859817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.859997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.860045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 
00:37:00.484 [2024-07-13 05:26:06.860226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.860261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.860430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.860465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.860599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.860632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.860807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.860842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.860997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.861030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.861176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.861211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.861372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.861405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.861559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.861592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.861721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.861753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.484 [2024-07-13 05:26:06.861926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.861963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 
00:37:00.484 [2024-07-13 05:26:06.862100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.484 [2024-07-13 05:26:06.862145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.484 qpair failed and we were unable to recover it. 00:37:00.485 [2024-07-13 05:26:06.862313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-07-13 05:26:06.862347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 00:37:00.485 [2024-07-13 05:26:06.862509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-07-13 05:26:06.862542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 00:37:00.485 [2024-07-13 05:26:06.862684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-07-13 05:26:06.862718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 00:37:00.485 [2024-07-13 05:26:06.862887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-07-13 05:26:06.862922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 00:37:00.485 [2024-07-13 05:26:06.863081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-07-13 05:26:06.863115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 00:37:00.485 [2024-07-13 05:26:06.863281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-07-13 05:26:06.863319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 00:37:00.485 [2024-07-13 05:26:06.863471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-07-13 05:26:06.863504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 00:37:00.485 [2024-07-13 05:26:06.863661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-07-13 05:26:06.863694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 00:37:00.485 [2024-07-13 05:26:06.863846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-07-13 05:26:06.863894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 
00:37:00.485 [2024-07-13 05:26:06.864054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-07-13 05:26:06.864087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 00:37:00.485 [2024-07-13 05:26:06.864230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-07-13 05:26:06.864262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 00:37:00.485 [2024-07-13 05:26:06.864426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-07-13 05:26:06.864459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 00:37:00.485 [2024-07-13 05:26:06.864619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-07-13 05:26:06.864651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 00:37:00.485 [2024-07-13 05:26:06.864785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-07-13 05:26:06.864818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 00:37:00.485 [2024-07-13 05:26:06.864968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-07-13 05:26:06.865001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 00:37:00.485 [2024-07-13 05:26:06.865168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-07-13 05:26:06.865201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 00:37:00.485 [2024-07-13 05:26:06.865371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-07-13 05:26:06.865404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 00:37:00.485 [2024-07-13 05:26:06.865588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-07-13 05:26:06.865620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 00:37:00.485 [2024-07-13 05:26:06.865773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-07-13 05:26:06.865806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 
00:37:00.485 [... the qpair connect failures keep repeating; interleaved with them, an SPDK application logs its startup: ...]
00:37:00.485 [2024-07-13 05:26:06.866934] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:37:00.485 [2024-07-13 05:26:06.867071] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:37:00.487 [... the identical three-record connect failure continues from [2024-07-13 05:26:06.867] through [2024-07-13 05:26:06.883], still cycling over tqpair=0x615000210000, 0x6150001f2780 and 0x6150001ffe80, with every connect() to addr=10.0.0.2, port=4420 failing with errno = 111 ...]
00:37:00.487 [2024-07-13 05:26:06.883817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.487 [2024-07-13 05:26:06.883859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.487 qpair failed and we were unable to recover it. 00:37:00.487 [2024-07-13 05:26:06.884033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.487 [2024-07-13 05:26:06.884067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.487 qpair failed and we were unable to recover it. 00:37:00.487 [2024-07-13 05:26:06.884221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.487 [2024-07-13 05:26:06.884254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.487 qpair failed and we were unable to recover it. 00:37:00.487 [2024-07-13 05:26:06.884416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.487 [2024-07-13 05:26:06.884450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.487 qpair failed and we were unable to recover it. 00:37:00.487 [2024-07-13 05:26:06.884612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.487 [2024-07-13 05:26:06.884644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.487 qpair failed and we were unable to recover it. 00:37:00.487 [2024-07-13 05:26:06.884841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.487 [2024-07-13 05:26:06.884891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.487 qpair failed and we were unable to recover it. 00:37:00.487 [2024-07-13 05:26:06.885023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.487 [2024-07-13 05:26:06.885056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.487 qpair failed and we were unable to recover it. 00:37:00.487 [2024-07-13 05:26:06.885210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.487 [2024-07-13 05:26:06.885258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.487 qpair failed and we were unable to recover it. 00:37:00.487 [2024-07-13 05:26:06.885430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.487 [2024-07-13 05:26:06.885467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.487 qpair failed and we were unable to recover it. 00:37:00.487 [2024-07-13 05:26:06.885640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.487 [2024-07-13 05:26:06.885674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.487 qpair failed and we were unable to recover it. 
00:37:00.487 [2024-07-13 05:26:06.885846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.487 [2024-07-13 05:26:06.885903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.487 qpair failed and we were unable to recover it. 00:37:00.487 [2024-07-13 05:26:06.886074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.487 [2024-07-13 05:26:06.886108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.487 qpair failed and we were unable to recover it. 00:37:00.487 [2024-07-13 05:26:06.886249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.487 [2024-07-13 05:26:06.886283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.487 qpair failed and we were unable to recover it. 00:37:00.487 [2024-07-13 05:26:06.886447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.886480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.886675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.886709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.886878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.886912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.887061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.887113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.887268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.887303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.887462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.887495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.887653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.887686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 
00:37:00.488 [2024-07-13 05:26:06.887844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.887894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.888081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.888113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.888301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.888333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.888467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.888500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.888625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.888657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.888835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.888898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.889051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.889087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.889287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.889321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.889512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.889546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.889707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.889741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 
00:37:00.488 [2024-07-13 05:26:06.889911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.889946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.890139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.890181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.890378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.890412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.890550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.890595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.890760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.890796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.890931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.890965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.891142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.891198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.891371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.891406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.891568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.891602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.891760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.891794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 
00:37:00.488 [2024-07-13 05:26:06.891980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.892028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.892200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.892236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.892407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.892442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.892606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.892640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.892799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.892832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.893046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.893093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.893273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.893307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.893474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.893509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.893674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.488 [2024-07-13 05:26:06.893707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.488 qpair failed and we were unable to recover it. 00:37:00.488 [2024-07-13 05:26:06.893874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.893907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 
00:37:00.489 [2024-07-13 05:26:06.894056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.894089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.894258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.894291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.894446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.894478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.894655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.894687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.894858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.894900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.895087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.895120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.895260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.895298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.895466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.895498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.895628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.895660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.895789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.895822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 
00:37:00.489 [2024-07-13 05:26:06.896026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.896059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.896213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.896262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.896428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.896465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.896626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.896660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.896827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.896861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.897054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.897088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.897255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.897289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.897447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.897481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.897678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.897712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.897885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.897920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 
00:37:00.489 [2024-07-13 05:26:06.898067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.898101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.898271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.898303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.898433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.898466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.898651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.898684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.898819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.898852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.899006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.899054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.899240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.899274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.899425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.899458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.899647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.899681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.899874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.899907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 
00:37:00.489 [2024-07-13 05:26:06.900067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.900100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.900247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.900281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.900421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.900454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.900624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.900658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.900828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.900862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.901024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.901057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.901227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.901260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.901394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.901427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.901594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.901627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 00:37:00.489 [2024-07-13 05:26:06.901802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.489 [2024-07-13 05:26:06.901834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.489 qpair failed and we were unable to recover it. 
00:37:00.490 [2024-07-13 05:26:06.901985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.902019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.902202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.902250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.902444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.902480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.902641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.902674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.902804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.902837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.902999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.903032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.903174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.903212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.903371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.903404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.903540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.903573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.903734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.903766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 
00:37:00.490 [2024-07-13 05:26:06.903917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.903950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.904080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.904117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.904254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.904286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.904447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.904489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.904628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.904661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.904795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.904827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.904996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.905031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.905217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.905266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.905424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.905460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.905630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.905665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 
00:37:00.490 [2024-07-13 05:26:06.905839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.905890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.906051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.906084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.906252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.906286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.906447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.906480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.906618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.906650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.906832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.906871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.907039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.907072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.907210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.907244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.907406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.907439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.907571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.907604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 
00:37:00.490 [2024-07-13 05:26:06.907761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.907807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.907991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.908039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.908207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.908254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.908440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.908488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.908690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.908727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.908874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.908911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.909052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.909087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.909230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.909264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.909399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.909432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 00:37:00.490 [2024-07-13 05:26:06.909607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.909640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.490 qpair failed and we were unable to recover it. 
00:37:00.490 [2024-07-13 05:26:06.909808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.490 [2024-07-13 05:26:06.909840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.909988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.910020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.910179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.910212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.910399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.910432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.910563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.910595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.910751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.910784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.910969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.911024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.911193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.911253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.911428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.911465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.911625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.911659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 
00:37:00.491 [2024-07-13 05:26:06.911801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.911836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.911987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.912021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.912162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.912196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.912334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.912367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.912508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.912540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.912703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.912736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.912947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.912986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.913131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.913166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.913331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.913366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.913556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.913590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 
00:37:00.491 [2024-07-13 05:26:06.913745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.913779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.913912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.913946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.914116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.914151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.914322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.914354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.914496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.914529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.914691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.914724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.914877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.914926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.915112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.915160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.915315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.915353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.915510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.915544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 
00:37:00.491 [2024-07-13 05:26:06.915709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.915742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.915892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.915926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.916082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.916114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.916318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.916351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.916484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.916517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.916667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.916703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.916864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.916906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.917072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.917106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.917255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.917289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.917481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.917515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 
00:37:00.491 [2024-07-13 05:26:06.917696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-07-13 05:26:06.917744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-07-13 05:26:06.917943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.917991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.918157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.918204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.918379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.918415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.918548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.918582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.918724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.918757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.918938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.918992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.919168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.919204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.919373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.919407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.919584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.919620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 
00:37:00.492 [2024-07-13 05:26:06.919755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.919788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.919957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.920005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.920181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.920225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.920395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.920440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.920582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.920616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.920800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.920836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.921001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.921049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.921214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.921249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.921390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.921425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.921628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.921663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 
00:37:00.492 [2024-07-13 05:26:06.921808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.921842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.922024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.922057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.922220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.922253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.922426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.922459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.922621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.922655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.922795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.922828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.923004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.923062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.923275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.923311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.923477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.923511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.923649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.923684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 
00:37:00.492 [2024-07-13 05:26:06.923858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.923898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.924075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.924110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.924311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-07-13 05:26:06.924344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-07-13 05:26:06.924546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.924594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.924805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.924840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.925039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.925088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.925268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.925304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.925502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.925536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.925677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.925710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.925875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.925910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 
00:37:00.777 [2024-07-13 05:26:06.926068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.926101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.926275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.926308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.926477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.926510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.926670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.926703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.926829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.926863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.927041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.927077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.927214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.927253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.927453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.927488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.927629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.927668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.927822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.927856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 
00:37:00.777 [2024-07-13 05:26:06.928053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.928087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.928237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.928270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.928439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.928472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.928638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.928671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.928816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.928849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.928999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.929032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.929194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.929227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.929399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.929433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.929592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.929626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.929767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.929800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 
00:37:00.777 [2024-07-13 05:26:06.929980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.930028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.930199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.930247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.930428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.930464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.930605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.930641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.930803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.930837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.930985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.931020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.931155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.931189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.931339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.931373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.931525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.931559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.931728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.931763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 
00:37:00.777 [2024-07-13 05:26:06.931931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.931967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.777 [2024-07-13 05:26:06.932118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.777 [2024-07-13 05:26:06.932166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.777 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.932342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.932376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.932554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.932587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.932724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.932757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.932890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.932924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.933072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.933120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.933263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.933299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.933439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.933474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.933635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.933668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 
00:37:00.778 [2024-07-13 05:26:06.933806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.933840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.934022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.934070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.934231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.934266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.934427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.934462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.934617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.934650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.934843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.934894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.935049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.935113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.935295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.935330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.935470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.935504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.935656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.935689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 
00:37:00.778 [2024-07-13 05:26:06.935817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.935858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.936006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.936039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.936207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.936240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.936376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.936410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.936587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.936620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.936787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.936824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.937021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.937069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.937232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.937279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.937448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.937482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.937646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.937680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 
00:37:00.778 [2024-07-13 05:26:06.937859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.937905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.938039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.938073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.938257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.938305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.938449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.938485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.938662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.938696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.938832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.938873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.939038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.939073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.939251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.939286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.939462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.939498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.939685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.939719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 
00:37:00.778 [2024-07-13 05:26:06.939878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.939914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.940055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.778 [2024-07-13 05:26:06.940090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.778 qpair failed and we were unable to recover it. 00:37:00.778 [2024-07-13 05:26:06.940232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.940266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.940427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.940465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.940630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.940664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.940806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.940840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.941009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.941056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.941261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.941297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.941474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.941508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.941656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.941690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 
00:37:00.779 [2024-07-13 05:26:06.941820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.941859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.942012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.942045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.942193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.942226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.942424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.942457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.942591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.942624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.942804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.942841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.943043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.943090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.943293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.943341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.943527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.943564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.943733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.943767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 
00:37:00.779 [2024-07-13 05:26:06.943943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.943977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.944158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.944191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.944329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.944362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.944502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.944535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.944727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.944761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.944909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.944943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.945113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.945154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.945327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.945361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.945500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.945533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.945672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.945706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 
00:37:00.779 [2024-07-13 05:26:06.945864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.945921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.946069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.946105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.946319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.946355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.946491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.946534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.946699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.946733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.946893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.946928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.947109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.947158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.947314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.947349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.947492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.947525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 00:37:00.779 [2024-07-13 05:26:06.947717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.779 [2024-07-13 05:26:06.947749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.779 qpair failed and we were unable to recover it. 
00:37:00.779 [2024-07-13 05:26:06.947887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.779 [2024-07-13 05:26:06.947921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.779 qpair failed and we were unable to recover it.
00:37:00.779 [2024-07-13 05:26:06.948056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.779 [2024-07-13 05:26:06.948089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.779 qpair failed and we were unable to recover it.
00:37:00.779 [2024-07-13 05:26:06.948282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.779 [2024-07-13 05:26:06.948315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.779 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.948452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.948490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.948653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.948686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.948847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.948892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.949043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.949091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.949252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.949290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.949496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.949531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.949665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.949700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.949880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.949915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.950083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.950119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.950294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.950327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.950462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.950495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.950628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.950662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 EAL: No free 2048 kB hugepages reported on node 1
00:37:00.780 [2024-07-13 05:26:06.950820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.950886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.951058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.951111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.951296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.951333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.951469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.951503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.951662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.951695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.951875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.951909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.952050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.952085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.952234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.952268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.952430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.952464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.952599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.952633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.952766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.952799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.952964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.952998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.953154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.953189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.953373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.953421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.953564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.953599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.953770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.953803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.953975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.954009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.954191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.954240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.954434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.954469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.954604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.954637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.954765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.954799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.954952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.954986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.955141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.955186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.955373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.955407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.955549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.955582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.955746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.955780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.955938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.955985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.956152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.780 [2024-07-13 05:26:06.956188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.780 qpair failed and we were unable to recover it.
00:37:00.780 [2024-07-13 05:26:06.956365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.956400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.956576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.956609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.956771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.956819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.956980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.957029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.957203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.957238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.957427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.957461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.957629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.957663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.957822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.957856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.958030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.958065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.958216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.958250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.958410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.958443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.958637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.958670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.958805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.958838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.959000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.959038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.959172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.959206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.959342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.959376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.959540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.959574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.959710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.959745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.959898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.959947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.960123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.960159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.960326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.960360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.960526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.960559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.960721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.960755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.960891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.960926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.961101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.961134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.961302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.961336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.961475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.961508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.961694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.961729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.961871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.961906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.962043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.962077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.962224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.962258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.962419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.962452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.781 [2024-07-13 05:26:06.962582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.781 [2024-07-13 05:26:06.962618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.781 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.962747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.962780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.962969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.963017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.963194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.963241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.963418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.963454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.963628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.963674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.963809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.963843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.963995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.964029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.964219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.964255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.964397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.964432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.964605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.964639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.964797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.964831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.964981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.965016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.965155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.965189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.965343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.965377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.965509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.965543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.965708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.965742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.965935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.965984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.966169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.966216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.966394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.966431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.966599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.966632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.966790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.966831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.967001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.967050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.967210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.967244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.967390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.967424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.967585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.967619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.967803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.967836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.967995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.968043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.968211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.968259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.968402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.968438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.968603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.968637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.968769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.968803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.968974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.969022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.969202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.782 [2024-07-13 05:26:06.969237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.782 qpair failed and we were unable to recover it.
00:37:00.782 [2024-07-13 05:26:06.969403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.969437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.969606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.969640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.969800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.969834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.969987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.970021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.970171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.970204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.970393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.970426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.970562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.970596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.970758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.970806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.971006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.971042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.971188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.971222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.971382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.971417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.971559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.971594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.971764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.971812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.971982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.972017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.972215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.972264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.972449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.972496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.972635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.972669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.972809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.972843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.973000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.973035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.973178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.973214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.973380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.973413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.973563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.973596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.973723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.973756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.973903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.973937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.974078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.974111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.974288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.974320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.974481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.974513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.974674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.974713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.974860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.974916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.975062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.975098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.975237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.975271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.975401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.975435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.975618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.975677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.975837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.975904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.976080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.976128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.976272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.976307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.976440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.976473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.976685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.783 [2024-07-13 05:26:06.976718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.783 qpair failed and we were unable to recover it.
00:37:00.783 [2024-07-13 05:26:06.976869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.784 [2024-07-13 05:26:06.976905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.784 qpair failed and we were unable to recover it.
00:37:00.784 [2024-07-13 05:26:06.977061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.784 [2024-07-13 05:26:06.977108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.784 qpair failed and we were unable to recover it.
00:37:00.784 [2024-07-13 05:26:06.977295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.784 [2024-07-13 05:26:06.977331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.784 qpair failed and we were unable to recover it.
00:37:00.784 [2024-07-13 05:26:06.977518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.784 [2024-07-13 05:26:06.977552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.784 qpair failed and we were unable to recover it.
00:37:00.784 [2024-07-13 05:26:06.977689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.784 [2024-07-13 05:26:06.977723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.784 qpair failed and we were unable to recover it.
00:37:00.784 [2024-07-13 05:26:06.977899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.784 [2024-07-13 05:26:06.977948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.784 qpair failed and we were unable to recover it.
00:37:00.784 [2024-07-13 05:26:06.978137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.784 [2024-07-13 05:26:06.978184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.784 qpair failed and we were unable to recover it.
00:37:00.784 [2024-07-13 05:26:06.978356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.784 [2024-07-13 05:26:06.978391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.784 qpair failed and we were unable to recover it.
00:37:00.784 [2024-07-13 05:26:06.978551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.784 [2024-07-13 05:26:06.978584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.784 qpair failed and we were unable to recover it.
00:37:00.784 [2024-07-13 05:26:06.978724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.784 [2024-07-13 05:26:06.978757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.784 qpair failed and we were unable to recover it.
00:37:00.784 [2024-07-13 05:26:06.978905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.784 [2024-07-13 05:26:06.978939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.784 qpair failed and we were unable to recover it.
00:37:00.784 [2024-07-13 05:26:06.979102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.784 [2024-07-13 05:26:06.979134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.784 qpair failed and we were unable to recover it.
00:37:00.784 [2024-07-13 05:26:06.979270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.784 [2024-07-13 05:26:06.979302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.784 qpair failed and we were unable to recover it.
00:37:00.784 [2024-07-13 05:26:06.979436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.784 [2024-07-13 05:26:06.979468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.784 qpair failed and we were unable to recover it.
00:37:00.784 [2024-07-13 05:26:06.979595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.784 [2024-07-13 05:26:06.979628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.784 qpair failed and we were unable to recover it.
00:37:00.784 [2024-07-13 05:26:06.979782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.784 [2024-07-13 05:26:06.979830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:00.784 qpair failed and we were unable to recover it.
00:37:00.784 [2024-07-13 05:26:06.980017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.784 [2024-07-13 05:26:06.980065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.784 qpair failed and we were unable to recover it.
00:37:00.784 [2024-07-13 05:26:06.980210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.784 [2024-07-13 05:26:06.980247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.784 qpair failed and we were unable to recover it.
00:37:00.784 [2024-07-13 05:26:06.980391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.784 [2024-07-13 05:26:06.980426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.784 qpair failed and we were unable to recover it.
00:37:00.784 [2024-07-13 05:26:06.980587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.784 [2024-07-13 05:26:06.980621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.784 qpair failed and we were unable to recover it.
00:37:00.784 [2024-07-13 05:26:06.980790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.784 [2024-07-13 05:26:06.980838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:00.784 qpair failed and we were unable to recover it.
00:37:00.784 [2024-07-13 05:26:06.980993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.784 [2024-07-13 05:26:06.981027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.784 qpair failed and we were unable to recover it.
00:37:00.784 [2024-07-13 05:26:06.981160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.784 [2024-07-13 05:26:06.981192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.784 qpair failed and we were unable to recover it.
00:37:00.784 [2024-07-13 05:26:06.981331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.784 [2024-07-13 05:26:06.981364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.784 qpair failed and we were unable to recover it. 00:37:00.784 [2024-07-13 05:26:06.981522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.784 [2024-07-13 05:26:06.981555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.784 qpair failed and we were unable to recover it. 00:37:00.784 [2024-07-13 05:26:06.981692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.784 [2024-07-13 05:26:06.981724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.784 qpair failed and we were unable to recover it. 00:37:00.784 [2024-07-13 05:26:06.981894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.784 [2024-07-13 05:26:06.981927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.784 qpair failed and we were unable to recover it. 00:37:00.784 [2024-07-13 05:26:06.982060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.784 [2024-07-13 05:26:06.982094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.784 qpair failed and we were unable to recover it. 00:37:00.784 [2024-07-13 05:26:06.982232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.784 [2024-07-13 05:26:06.982266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.784 qpair failed and we were unable to recover it. 00:37:00.784 [2024-07-13 05:26:06.982403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.784 [2024-07-13 05:26:06.982448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.784 qpair failed and we were unable to recover it. 00:37:00.784 [2024-07-13 05:26:06.982589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.784 [2024-07-13 05:26:06.982622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.784 qpair failed and we were unable to recover it. 00:37:00.784 [2024-07-13 05:26:06.982785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.784 [2024-07-13 05:26:06.982819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.784 qpair failed and we were unable to recover it. 00:37:00.784 [2024-07-13 05:26:06.982974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.784 [2024-07-13 05:26:06.983022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.784 qpair failed and we were unable to recover it. 
00:37:00.784 [2024-07-13 05:26:06.983180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.784 [2024-07-13 05:26:06.983229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.784 qpair failed and we were unable to recover it. 00:37:00.784 [2024-07-13 05:26:06.983371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.784 [2024-07-13 05:26:06.983406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.784 qpair failed and we were unable to recover it. 00:37:00.784 [2024-07-13 05:26:06.983562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.784 [2024-07-13 05:26:06.983596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.784 qpair failed and we were unable to recover it. 00:37:00.784 [2024-07-13 05:26:06.983738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.784 [2024-07-13 05:26:06.983771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.784 qpair failed and we were unable to recover it. 00:37:00.784 [2024-07-13 05:26:06.983929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.784 [2024-07-13 05:26:06.983978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.784 qpair failed and we were unable to recover it. 00:37:00.784 [2024-07-13 05:26:06.984128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.784 [2024-07-13 05:26:06.984162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.784 qpair failed and we were unable to recover it. 00:37:00.784 [2024-07-13 05:26:06.984323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.784 [2024-07-13 05:26:06.984356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.784 qpair failed and we were unable to recover it. 00:37:00.784 [2024-07-13 05:26:06.984515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.784 [2024-07-13 05:26:06.984548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.784 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.984683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.984717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.984877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.984910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 
00:37:00.785 [2024-07-13 05:26:06.985075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.985108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.985268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.985301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.985435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.985467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.985605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.985638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.985772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.985805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.985990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.986024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.986172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.986218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.986404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.986440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.986579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.986613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.986755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.986788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 
00:37:00.785 [2024-07-13 05:26:06.986916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.986951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.987083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.987117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.987280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.987313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.987455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.987489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.987648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.987681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.987819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.987854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.988042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.988090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.988241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.988277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.988468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.988502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.988637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.988671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 
00:37:00.785 [2024-07-13 05:26:06.988830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.988871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.989013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.989047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.989228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.989263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.989400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.989433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.989593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.989625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.989791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.989824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.989968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.990002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.990150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.990198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.990352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.990387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.990528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.990563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 
00:37:00.785 [2024-07-13 05:26:06.990695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.785 [2024-07-13 05:26:06.990728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.785 qpair failed and we were unable to recover it. 00:37:00.785 [2024-07-13 05:26:06.990893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.990928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.991059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.991092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.991248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.991281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.991441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.991474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.991623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.991657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.991806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.991841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.992008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.992056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.992244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.992292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.992452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.992488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 
00:37:00.786 [2024-07-13 05:26:06.992656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.992690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.992855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.992905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.993069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.993103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.993246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.993279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.993440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.993474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.993611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.993649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.993821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.993854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.993993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.994036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.994201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.994233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.994401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.994434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 
00:37:00.786 [2024-07-13 05:26:06.994628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.994661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.994805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.994838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.995005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.995052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.995234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.995274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.995441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.995475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.995635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.995668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.995813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.995846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.996019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.996067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.996237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.996271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.996433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.996466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 
00:37:00.786 [2024-07-13 05:26:06.996604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.996637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.996771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.996804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.996960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.996994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.997162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.997209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.997383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.997418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.997585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.997632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.997791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.997823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.997990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.998037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.998200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.998237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.998367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.998401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 
00:37:00.786 [2024-07-13 05:26:06.998542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.998577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.786 [2024-07-13 05:26:06.998742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.786 [2024-07-13 05:26:06.998776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.786 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:06.998904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:06.998937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:06.999102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:06.999135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:06.999264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:06.999298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:06.999423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:06.999456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:06.999585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:06.999618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:06.999782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:06.999814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:06.999981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.000014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.000188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.000223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 
00:37:00.787 [2024-07-13 05:26:07.000397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.000431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.000596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.000631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.000790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.000823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.000968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.001001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.001156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.001203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.001346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.001381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.001520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.001554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.001729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.001762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.001904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.001939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.002126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.002174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 
00:37:00.787 [2024-07-13 05:26:07.002326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.002363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.002526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.002560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.002727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.002761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.002928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.002968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.003111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.003145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.003302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.003336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.003514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.003548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.003708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.003743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.003876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.003910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.004090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.004138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 
00:37:00.787 [2024-07-13 05:26:07.004310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.004347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.004485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.004519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.004679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.004712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.004846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.004886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.005050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.005084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.005223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.005256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.005404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.005437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.005582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.005616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.005748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.005781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.005931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.005966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 
00:37:00.787 [2024-07-13 05:26:07.006154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.006187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.006356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.006389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.787 qpair failed and we were unable to recover it. 00:37:00.787 [2024-07-13 05:26:07.006521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.787 [2024-07-13 05:26:07.006555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.788 qpair failed and we were unable to recover it. 00:37:00.788 [2024-07-13 05:26:07.006693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.788 [2024-07-13 05:26:07.006725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.788 qpair failed and we were unable to recover it. 00:37:00.788 [2024-07-13 05:26:07.006863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.788 [2024-07-13 05:26:07.006905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.788 qpair failed and we were unable to recover it. 00:37:00.788 [2024-07-13 05:26:07.007051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.788 [2024-07-13 05:26:07.007084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.788 qpair failed and we were unable to recover it. 00:37:00.788 [2024-07-13 05:26:07.007242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.788 [2024-07-13 05:26:07.007275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.788 qpair failed and we were unable to recover it. 00:37:00.788 [2024-07-13 05:26:07.007416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.788 [2024-07-13 05:26:07.007484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.788 qpair failed and we were unable to recover it. 00:37:00.788 [2024-07-13 05:26:07.007646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.788 [2024-07-13 05:26:07.007679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.788 qpair failed and we were unable to recover it. 00:37:00.788 [2024-07-13 05:26:07.007829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.788 [2024-07-13 05:26:07.007887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.788 qpair failed and we were unable to recover it. 
00:37:00.788 [2024-07-13 05:26:07.008079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.788 [2024-07-13 05:26:07.008127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.788 qpair failed and we were unable to recover it. 00:37:00.788 [2024-07-13 05:26:07.008292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.788 [2024-07-13 05:26:07.008328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.788 qpair failed and we were unable to recover it. 00:37:00.788 [2024-07-13 05:26:07.008526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.788 [2024-07-13 05:26:07.008561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.788 qpair failed and we were unable to recover it. 00:37:00.788 [2024-07-13 05:26:07.008694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.788 [2024-07-13 05:26:07.008727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.788 qpair failed and we were unable to recover it. 00:37:00.788 [2024-07-13 05:26:07.008905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.788 [2024-07-13 05:26:07.008954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.788 qpair failed and we were unable to recover it. 00:37:00.788 [2024-07-13 05:26:07.009128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.788 [2024-07-13 05:26:07.009173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.788 qpair failed and we were unable to recover it. 00:37:00.788 [2024-07-13 05:26:07.009335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.788 [2024-07-13 05:26:07.009369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.788 qpair failed and we were unable to recover it. 00:37:00.788 [2024-07-13 05:26:07.009517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.788 [2024-07-13 05:26:07.009550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.788 qpair failed and we were unable to recover it. 00:37:00.788 [2024-07-13 05:26:07.009733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.788 [2024-07-13 05:26:07.009767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.788 qpair failed and we were unable to recover it. 00:37:00.788 [2024-07-13 05:26:07.009914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.788 [2024-07-13 05:26:07.009947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.788 qpair failed and we were unable to recover it. 
00:37:00.788 [2024-07-13 05:26:07.010103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.788 [2024-07-13 05:26:07.010151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.788 qpair failed and we were unable to recover it.
[... the same three-line posix.c:1038 / nvme_tcp.c:2383 failure group repeats for some 200 further connect() attempts between 05:26:07.010 and 05:26:07.050 (elapsed 00:37:00.788 through 00:37:00.793), cycling through tqpair values 0x6150001f2780, 0x6150001ffe80, 0x615000210000 and 0x61500021ff00, always with addr=10.0.0.2, port=4420 and errno = 111; the only interleaved record that differs is the notice below ...]
00:37:00.789 [2024-07-13 05:26:07.021843] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
[... identical connect()/qpair retry failures continue to the end of this burst ...]
00:37:00.793 [2024-07-13 05:26:07.050667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.793 [2024-07-13 05:26:07.050701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.793 qpair failed and we were unable to recover it. 00:37:00.793 [2024-07-13 05:26:07.050881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.793 [2024-07-13 05:26:07.050937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.793 qpair failed and we were unable to recover it. 00:37:00.793 [2024-07-13 05:26:07.051086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.793 [2024-07-13 05:26:07.051132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.793 qpair failed and we were unable to recover it. 00:37:00.793 [2024-07-13 05:26:07.051301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.793 [2024-07-13 05:26:07.051334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.793 qpair failed and we were unable to recover it. 00:37:00.793 [2024-07-13 05:26:07.051472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.793 [2024-07-13 05:26:07.051504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.793 qpair failed and we were unable to recover it. 00:37:00.793 [2024-07-13 05:26:07.051651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.793 [2024-07-13 05:26:07.051683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.793 qpair failed and we were unable to recover it. 00:37:00.793 [2024-07-13 05:26:07.051824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.793 [2024-07-13 05:26:07.051857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.793 qpair failed and we were unable to recover it. 00:37:00.793 [2024-07-13 05:26:07.052023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.793 [2024-07-13 05:26:07.052076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.793 qpair failed and we were unable to recover it. 00:37:00.793 [2024-07-13 05:26:07.052259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.793 [2024-07-13 05:26:07.052294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.793 qpair failed and we were unable to recover it. 00:37:00.793 [2024-07-13 05:26:07.052430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.793 [2024-07-13 05:26:07.052464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.793 qpair failed and we were unable to recover it. 
00:37:00.793 [2024-07-13 05:26:07.052599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.793 [2024-07-13 05:26:07.052632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.793 qpair failed and we were unable to recover it. 00:37:00.793 [2024-07-13 05:26:07.052762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.052796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.052983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.053031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.053207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.053241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.053405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.053438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.053605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.053637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.053793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.053825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.053993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.054026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.054179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.054215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.054375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.054420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 
00:37:00.794 [2024-07-13 05:26:07.054585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.054618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.054764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.054798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.054938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.054973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.055141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.055175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.055341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.055375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.055565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.055597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.055762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.055795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.055939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.055972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.056107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.056139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.056325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.056358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 
00:37:00.794 [2024-07-13 05:26:07.056494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.056525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.056661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.056693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.056848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.056888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.057029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.057061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.057265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.057314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.057467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.057503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.057676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.057712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.057847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.057890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.058027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.058060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.058227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.058260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 
00:37:00.794 [2024-07-13 05:26:07.058424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.058457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.058621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.058653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.058791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.058824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.058993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.059025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.059184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.059217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.059379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.794 [2024-07-13 05:26:07.059412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.794 qpair failed and we were unable to recover it. 00:37:00.794 [2024-07-13 05:26:07.059586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.059619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.059771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.059808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.059949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.059983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.060164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.060211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 
00:37:00.795 [2024-07-13 05:26:07.060371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.060420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.060592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.060633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.060805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.060838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.060992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.061028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.061185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.061219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.061359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.061393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.061558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.061591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.061744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.061792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.061975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.062011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.062147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.062181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 
00:37:00.795 [2024-07-13 05:26:07.062343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.062377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.062551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.062586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.062749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.062784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.062935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.062982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.063126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.063161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.063298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.063330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.063470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.063503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.063640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.063673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.063835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.063874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.064013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.064045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 
00:37:00.795 [2024-07-13 05:26:07.064198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.064231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.064402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.064434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.064573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.064605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.064730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.064762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.064919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.064967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.065116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.065151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.065339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.065373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.065538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.065571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.065735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.065776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.065920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.065955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 
00:37:00.795 [2024-07-13 05:26:07.066117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.066150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.066290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.066324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.066483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.066516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.066700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.066735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.066911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.066960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.067141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.067177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.795 [2024-07-13 05:26:07.067333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.795 [2024-07-13 05:26:07.067368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.795 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.067502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.067542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.067714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.067748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.067889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.067924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 
00:37:00.796 [2024-07-13 05:26:07.068098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.068145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.068324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.068359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.068500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.068534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.068695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.068728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.068884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.068917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.069075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.069108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.069278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.069309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.069468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.069499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.069641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.069686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.069843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.069882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 
00:37:00.796 [2024-07-13 05:26:07.070017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.070049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.070213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.070260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.070430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.070466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.070629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.070662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.070790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.070823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.070971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.071005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.071161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.071194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.071369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.071402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.071536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.071569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.071732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.071765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 
00:37:00.796 [2024-07-13 05:26:07.071912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.071947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.072106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.072140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.072299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.072333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.072521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.072555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.072692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.072725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.072856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.072902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.073067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.073100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.073257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.073290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.073478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.073511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.073677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.073711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 
00:37:00.796 [2024-07-13 05:26:07.073873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.073922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.074093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.074129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.074322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.074356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.074506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.074541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.074697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.074743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.074924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.074959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.075127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.075160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.796 qpair failed and we were unable to recover it. 00:37:00.796 [2024-07-13 05:26:07.075318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.796 [2024-07-13 05:26:07.075352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.797 qpair failed and we were unable to recover it. 00:37:00.797 [2024-07-13 05:26:07.075502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.797 [2024-07-13 05:26:07.075537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.797 qpair failed and we were unable to recover it. 00:37:00.797 [2024-07-13 05:26:07.075690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.797 [2024-07-13 05:26:07.075724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.797 qpair failed and we were unable to recover it. 
00:37:00.797 [2024-07-13 05:26:07.075889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.797 [2024-07-13 05:26:07.075939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:00.797 qpair failed and we were unable to recover it.
[... the same three-record failure pattern repeats continuously from 05:26:07.075889 through 05:26:07.116519: posix_sock_create reports connect() failed with errno = 111 (ECONNREFUSED) for addr=10.0.0.2, port=4420, nvme_tcp_qpair_connect_sock reports a sock connection error on tqpair 0x61500021ff00, 0x6150001f2780, or 0x6150001ffe80, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:37:00.802 [2024-07-13 05:26:07.116486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.802 [2024-07-13 05:26:07.116519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:00.802 qpair failed and we were unable to recover it.
00:37:00.802 [2024-07-13 05:26:07.116679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.802 [2024-07-13 05:26:07.116711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.802 qpair failed and we were unable to recover it. 00:37:00.802 [2024-07-13 05:26:07.116915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.802 [2024-07-13 05:26:07.116962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.802 qpair failed and we were unable to recover it. 00:37:00.802 [2024-07-13 05:26:07.117129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.802 [2024-07-13 05:26:07.117191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.802 qpair failed and we were unable to recover it. 00:37:00.802 [2024-07-13 05:26:07.117404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.802 [2024-07-13 05:26:07.117440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.802 qpair failed and we were unable to recover it. 00:37:00.802 [2024-07-13 05:26:07.117629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.802 [2024-07-13 05:26:07.117663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.802 qpair failed and we were unable to recover it. 00:37:00.802 [2024-07-13 05:26:07.117798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.802 [2024-07-13 05:26:07.117832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.802 qpair failed and we were unable to recover it. 00:37:00.802 [2024-07-13 05:26:07.117982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.802 [2024-07-13 05:26:07.118016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.802 qpair failed and we were unable to recover it. 00:37:00.802 [2024-07-13 05:26:07.118148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.802 [2024-07-13 05:26:07.118181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.802 qpair failed and we were unable to recover it. 00:37:00.802 [2024-07-13 05:26:07.118404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.802 [2024-07-13 05:26:07.118442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.802 qpair failed and we were unable to recover it. 00:37:00.802 [2024-07-13 05:26:07.118592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.802 [2024-07-13 05:26:07.118625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.802 qpair failed and we were unable to recover it. 
00:37:00.802 [2024-07-13 05:26:07.118801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.802 [2024-07-13 05:26:07.118849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.802 qpair failed and we were unable to recover it. 00:37:00.802 [2024-07-13 05:26:07.119030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.802 [2024-07-13 05:26:07.119064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.802 qpair failed and we were unable to recover it. 00:37:00.802 [2024-07-13 05:26:07.119218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.802 [2024-07-13 05:26:07.119252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.802 qpair failed and we were unable to recover it. 00:37:00.802 [2024-07-13 05:26:07.119409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.802 [2024-07-13 05:26:07.119442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.802 qpair failed and we were unable to recover it. 00:37:00.802 [2024-07-13 05:26:07.119605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.802 [2024-07-13 05:26:07.119645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.802 qpair failed and we were unable to recover it. 00:37:00.802 [2024-07-13 05:26:07.119805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.802 [2024-07-13 05:26:07.119838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.802 qpair failed and we were unable to recover it. 00:37:00.802 [2024-07-13 05:26:07.120011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.802 [2024-07-13 05:26:07.120047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.802 qpair failed and we were unable to recover it. 00:37:00.802 [2024-07-13 05:26:07.120247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.802 [2024-07-13 05:26:07.120294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.802 qpair failed and we were unable to recover it. 00:37:00.802 [2024-07-13 05:26:07.120466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.802 [2024-07-13 05:26:07.120500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.802 qpair failed and we were unable to recover it. 00:37:00.802 [2024-07-13 05:26:07.120663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.802 [2024-07-13 05:26:07.120696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.802 qpair failed and we were unable to recover it. 
00:37:00.802 [2024-07-13 05:26:07.120840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.802 [2024-07-13 05:26:07.120879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.802 qpair failed and we were unable to recover it. 00:37:00.802 [2024-07-13 05:26:07.121046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.802 [2024-07-13 05:26:07.121078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.802 qpair failed and we were unable to recover it. 00:37:00.802 [2024-07-13 05:26:07.121207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.802 [2024-07-13 05:26:07.121239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.802 qpair failed and we were unable to recover it. 00:37:00.802 [2024-07-13 05:26:07.121427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.802 [2024-07-13 05:26:07.121459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.802 qpair failed and we were unable to recover it. 00:37:00.802 [2024-07-13 05:26:07.121628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.802 [2024-07-13 05:26:07.121662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.802 qpair failed and we were unable to recover it. 00:37:00.802 [2024-07-13 05:26:07.121829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.802 [2024-07-13 05:26:07.121870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.122018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.122052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.122220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.122255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.122429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.122474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.122656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.122704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 
00:37:00.803 [2024-07-13 05:26:07.122893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.122929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.123078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.123112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.123274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.123306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.123454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.123486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.123663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.123695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.123860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.123914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.124057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.124092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.124253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.124287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.124476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.124510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.124664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.124698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 
00:37:00.803 [2024-07-13 05:26:07.124883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.124931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.125112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.125159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.125325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.125360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.125503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.125535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.125697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.125730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.125915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.125949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.126091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.126124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.126256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.126288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.126448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.126481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.126637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.126670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 
00:37:00.803 [2024-07-13 05:26:07.126802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.126834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.126979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.127012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.127195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.127243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.127413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.127449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.127601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.127640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.127805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.127839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.128023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.128071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.128214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.128250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.128393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.128426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 00:37:00.803 [2024-07-13 05:26:07.128590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.803 [2024-07-13 05:26:07.128623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.803 qpair failed and we were unable to recover it. 
00:37:00.803 [2024-07-13 05:26:07.128785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.128817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.128966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.128998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.129135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.129168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.129328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.129360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.129525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.129557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.129691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.129723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.129859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.129897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.130035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.130067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.130262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.130298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.130431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.130465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 
00:37:00.804 [2024-07-13 05:26:07.130594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.130627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.130788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.130823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.131019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.131054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.131221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.131255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.131404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.131437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.131570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.131601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.131730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.131762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.131954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.131987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.132130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.132163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.132321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.132353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 
00:37:00.804 [2024-07-13 05:26:07.132520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.132552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.132687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.132720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.132863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.132905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.133073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.133105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.133293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.133326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.133458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.133491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.133664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.133713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.133890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.133928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.134098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.134146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.134326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.134360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 
00:37:00.804 [2024-07-13 05:26:07.134495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.134528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.134662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.134693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.134853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.134891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.135032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.135077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.135233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.135271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.135405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.135437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.135613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.135645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.135791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.135827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.135999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.136034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.136209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.136243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 
00:37:00.804 [2024-07-13 05:26:07.136405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.804 [2024-07-13 05:26:07.136438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.804 qpair failed and we were unable to recover it. 00:37:00.804 [2024-07-13 05:26:07.136610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.136657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.136913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.136950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.137094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.137127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.137271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.137303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.137461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.137493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.137621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.137652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.137809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.137856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.138016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.138052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.138192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.138227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 
00:37:00.805 [2024-07-13 05:26:07.138391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.138425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.138562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.138597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.138727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.138760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.138943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.138991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.139148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.139182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.139340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.139373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.139511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.139545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.139676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.139708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.139877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.139912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.140047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.140081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 
00:37:00.805 [2024-07-13 05:26:07.140220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.140254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.140437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.140485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.140633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.140666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.140828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.140861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.141033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.141066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.141253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.141285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.141496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.141529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.141736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.141771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.141913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.141947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.142081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.142115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 
00:37:00.805 [2024-07-13 05:26:07.142274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.142307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.142466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.142500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.142638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.142672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.142863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.142902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.143064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.143102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.143292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.143325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.143462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.143496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.143630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.143663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.143835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.143875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 00:37:00.805 [2024-07-13 05:26:07.144029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.805 [2024-07-13 05:26:07.144077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.805 qpair failed and we were unable to recover it. 
00:37:00.810 [2024-07-13 05:26:07.178952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.810 [2024-07-13 05:26:07.179000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.810 qpair failed and we were unable to recover it. 00:37:00.810 [2024-07-13 05:26:07.179179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.810 [2024-07-13 05:26:07.179235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.810 qpair failed and we were unable to recover it. 00:37:00.810 [2024-07-13 05:26:07.179437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.810 [2024-07-13 05:26:07.179473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.810 qpair failed and we were unable to recover it. 00:37:00.810 [2024-07-13 05:26:07.179642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.810 [2024-07-13 05:26:07.179676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.810 qpair failed and we were unable to recover it. 00:37:00.810 [2024-07-13 05:26:07.179876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.810 [2024-07-13 05:26:07.179911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.810 qpair failed and we were unable to recover it. 00:37:00.810 [2024-07-13 05:26:07.180079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.810 [2024-07-13 05:26:07.180112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.810 qpair failed and we were unable to recover it. 00:37:00.810 [2024-07-13 05:26:07.180253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.810 [2024-07-13 05:26:07.180287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.810 qpair failed and we were unable to recover it. 00:37:00.810 [2024-07-13 05:26:07.180471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.810 [2024-07-13 05:26:07.180504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.810 qpair failed and we were unable to recover it. 00:37:00.810 [2024-07-13 05:26:07.180669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.810 [2024-07-13 05:26:07.180701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.810 qpair failed and we were unable to recover it. 00:37:00.810 [2024-07-13 05:26:07.180954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.810 [2024-07-13 05:26:07.180988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.810 qpair failed and we were unable to recover it. 
00:37:00.810 [2024-07-13 05:26:07.181174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.810 [2024-07-13 05:26:07.181221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.810 qpair failed and we were unable to recover it. 00:37:00.810 [2024-07-13 05:26:07.181366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.810 [2024-07-13 05:26:07.181402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.810 qpair failed and we were unable to recover it. 00:37:00.810 [2024-07-13 05:26:07.181565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.810 [2024-07-13 05:26:07.181599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.810 qpair failed and we were unable to recover it. 00:37:00.810 [2024-07-13 05:26:07.181761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.810 [2024-07-13 05:26:07.181793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.810 qpair failed and we were unable to recover it. 00:37:00.810 [2024-07-13 05:26:07.181954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.810 [2024-07-13 05:26:07.182002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.810 qpair failed and we were unable to recover it. 00:37:00.810 [2024-07-13 05:26:07.182163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.810 [2024-07-13 05:26:07.182211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.810 qpair failed and we were unable to recover it. 00:37:00.810 [2024-07-13 05:26:07.182357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.810 [2024-07-13 05:26:07.182395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.810 qpair failed and we were unable to recover it. 00:37:00.810 [2024-07-13 05:26:07.182539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.810 [2024-07-13 05:26:07.182574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.810 qpair failed and we were unable to recover it. 00:37:00.810 [2024-07-13 05:26:07.182746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.810 [2024-07-13 05:26:07.182779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.810 qpair failed and we were unable to recover it. 00:37:00.810 [2024-07-13 05:26:07.182951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.810 [2024-07-13 05:26:07.182986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 
00:37:00.811 [2024-07-13 05:26:07.183160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.183194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.183331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.183364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.183524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.183556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.183720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.183753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.183892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.183926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.184089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.184122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.184290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.184324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.184463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.184496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.184640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.184673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.184839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.184879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 
00:37:00.811 [2024-07-13 05:26:07.185042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.185074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.185239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.185272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.185462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.185495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.185657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.185691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.185825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.185858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.186006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.186039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.186174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.186207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.186337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.186370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.186505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.186537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.186673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.186719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 
00:37:00.811 [2024-07-13 05:26:07.186848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.186886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.187046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.187084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.187222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.187255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.187391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.187424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.187563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.187596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.187749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.187798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.187981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.188018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.188222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.188269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.188417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.188451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.188616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.188648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 
00:37:00.811 [2024-07-13 05:26:07.188837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.188876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.189008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.189040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.189195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.189243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.811 [2024-07-13 05:26:07.189423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.811 [2024-07-13 05:26:07.189459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.811 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.189628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.189662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.189809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.189842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.189983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.190016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.190152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.190185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.190319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.190353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.190513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.190546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 
00:37:00.812 [2024-07-13 05:26:07.190685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.190718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.190883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.190931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.191077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.191114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.191266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.191314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.191466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.191502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.191664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.191698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.191840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.191886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.192024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.192058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.192236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.192272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.192409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.192443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 
00:37:00.812 [2024-07-13 05:26:07.192580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.192613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.192770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.192803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.192985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.193034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.193190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.193238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.193442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.193477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.193725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.193758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.193945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.193980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.194153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.194189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.194326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.194359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.194499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.194532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 
00:37:00.812 [2024-07-13 05:26:07.194696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.194729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.194871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.194912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.195063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.195096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.195258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.195291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.195439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.195486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.195632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.195667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.195827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.195860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.196052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.196086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.196238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.196271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.196412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.196446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 
00:37:00.812 [2024-07-13 05:26:07.196607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.196639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.196795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.196827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.196975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.197008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.197144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.197176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.812 qpair failed and we were unable to recover it. 00:37:00.812 [2024-07-13 05:26:07.197334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.812 [2024-07-13 05:26:07.197366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.197541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.197575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.197733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.197766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.197949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.197997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.198170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.198206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.198344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.198379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 
00:37:00.813 [2024-07-13 05:26:07.198515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.198550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.198728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.198775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.198965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.199015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.199165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.199199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.199343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.199376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.199503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.199535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.199675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.199707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.199846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.199885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.200056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.200089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.200224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.200258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 
00:37:00.813 [2024-07-13 05:26:07.200402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.200435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.200597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.200635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.200793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.200825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.201024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.201057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.201216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.201249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.201391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.201424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.201613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.201644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.201775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.201807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.201986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.202020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.202164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.202196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 
00:37:00.813 [2024-07-13 05:26:07.202353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.202385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.202513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.202550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.202707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.202739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.202896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.202944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.203114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.203162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.203333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.203370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.203506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.203542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.203704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.203737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.203884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.203939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.204104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.204138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 
00:37:00.813 [2024-07-13 05:26:07.204283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.204315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.204452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.204484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.204645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.204678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.204811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.204843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.205048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.813 [2024-07-13 05:26:07.205080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.813 qpair failed and we were unable to recover it. 00:37:00.813 [2024-07-13 05:26:07.205224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.205255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.205416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.205448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.205603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.205636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.205778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.205814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.205980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.206028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 
00:37:00.814 [2024-07-13 05:26:07.206192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.206227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.206381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.206415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.206542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.206575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.206762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.206795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.206961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.206995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.207158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.207191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.207317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.207350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.207489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.207521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.207693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.207726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.207922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.207971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 
00:37:00.814 [2024-07-13 05:26:07.208114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.208149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.208315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.208349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.208489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.208522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.208654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.208688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.208825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.208859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.209044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.209091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.209258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.209293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.209449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.209522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.209662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.209694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.209853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.209892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 
00:37:00.814 [2024-07-13 05:26:07.210033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.210066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.210230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.210267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.210424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.210457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.210601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.210633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.210794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.210829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.210979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.211014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.211181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.211215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.211404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.211437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.211607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.211639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.211796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.211828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 
00:37:00.814 [2024-07-13 05:26:07.211968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.212000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.212165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.212199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.212338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.212371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.212534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.212566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.212702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.212734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.212918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.814 [2024-07-13 05:26:07.212951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.814 qpair failed and we were unable to recover it. 00:37:00.814 [2024-07-13 05:26:07.213131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.213179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.213352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.213388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.213549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.213583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.213761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.213795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 
00:37:00.815 [2024-07-13 05:26:07.213964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.213999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.214132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.214165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.214342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.214376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.214513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.214545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.214766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.214800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.214935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.214968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.215108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.215140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.215268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.215300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.215444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.215477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.215625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.215658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 
00:37:00.815 [2024-07-13 05:26:07.215823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.215855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.216020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.216052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.216211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.216242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.216374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.216407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.216544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.216576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.216733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.216765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.216914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.216961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.217149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.217197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.217357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.217393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.217584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.217618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 
00:37:00.815 [2024-07-13 05:26:07.217781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.217814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.217992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.218045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.218216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.218250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.218413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.218446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.218608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.218639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.218804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.218836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.219005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.219037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.219174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.219206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.219369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.219401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.219545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.219577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 
00:37:00.815 [2024-07-13 05:26:07.219734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.219782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.219941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.219977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.815 qpair failed and we were unable to recover it. 00:37:00.815 [2024-07-13 05:26:07.220142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.815 [2024-07-13 05:26:07.220177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.816 qpair failed and we were unable to recover it. 00:37:00.816 [2024-07-13 05:26:07.220311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.816 [2024-07-13 05:26:07.220344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.816 qpair failed and we were unable to recover it. 00:37:00.816 [2024-07-13 05:26:07.220481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.816 [2024-07-13 05:26:07.220515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.816 qpair failed and we were unable to recover it. 00:37:00.816 [2024-07-13 05:26:07.220684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.816 [2024-07-13 05:26:07.220718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.816 qpair failed and we were unable to recover it. 00:37:00.816 [2024-07-13 05:26:07.220890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.816 [2024-07-13 05:26:07.220924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.816 qpair failed and we were unable to recover it. 00:37:00.816 [2024-07-13 05:26:07.221092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.816 [2024-07-13 05:26:07.221140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.816 qpair failed and we were unable to recover it. 00:37:00.816 [2024-07-13 05:26:07.221312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.816 [2024-07-13 05:26:07.221347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.816 qpair failed and we were unable to recover it. 00:37:00.816 [2024-07-13 05:26:07.221502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.816 [2024-07-13 05:26:07.221535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.816 qpair failed and we were unable to recover it. 
00:37:00.816 [2024-07-13 05:26:07.221701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.816 [2024-07-13 05:26:07.221734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.816 qpair failed and we were unable to recover it. 00:37:00.816 [2024-07-13 05:26:07.221977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.816 [2024-07-13 05:26:07.222011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.816 qpair failed and we were unable to recover it. 00:37:00.816 [2024-07-13 05:26:07.222198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.816 [2024-07-13 05:26:07.222231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.816 qpair failed and we were unable to recover it. 00:37:00.816 [2024-07-13 05:26:07.222386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.816 [2024-07-13 05:26:07.222420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.816 qpair failed and we were unable to recover it. 00:37:00.816 [2024-07-13 05:26:07.222551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.816 [2024-07-13 05:26:07.222584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.816 qpair failed and we were unable to recover it. 00:37:00.816 [2024-07-13 05:26:07.222741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.816 [2024-07-13 05:26:07.222789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.816 qpair failed and we were unable to recover it. 00:37:00.816 [2024-07-13 05:26:07.222997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.816 [2024-07-13 05:26:07.223045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.816 qpair failed and we were unable to recover it. 00:37:00.816 [2024-07-13 05:26:07.223227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.816 [2024-07-13 05:26:07.223275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.816 qpair failed and we were unable to recover it. 00:37:00.816 [2024-07-13 05:26:07.223459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.816 [2024-07-13 05:26:07.223494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.816 qpair failed and we were unable to recover it. 00:37:00.816 [2024-07-13 05:26:07.223637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.816 [2024-07-13 05:26:07.223671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.816 qpair failed and we were unable to recover it. 
00:37:00.816 [2024-07-13 05:26:07.223857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.816 [2024-07-13 05:26:07.223897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.816 qpair failed and we were unable to recover it. 00:37:00.816 [2024-07-13 05:26:07.224057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.816 [2024-07-13 05:26:07.224091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.816 qpair failed and we were unable to recover it. 00:37:00.816 [2024-07-13 05:26:07.224254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.816 [2024-07-13 05:26:07.224287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.816 qpair failed and we were unable to recover it. 00:37:00.816 [2024-07-13 05:26:07.224422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.816 [2024-07-13 05:26:07.224459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.816 qpair failed and we were unable to recover it. 00:37:00.816 [2024-07-13 05:26:07.224620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.816 [2024-07-13 05:26:07.224653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.816 qpair failed and we were unable to recover it. 00:37:00.816 [2024-07-13 05:26:07.224799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.224847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.225022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.225056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.225192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.225226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.225390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.225429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.225616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.225650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 
00:37:00.817 [2024-07-13 05:26:07.225783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.225816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.226002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.226042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.226202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.226237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.226428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.226461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.226603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.226634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.226774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.226806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.226954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.226988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.227149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.227182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.227313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.227345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.227510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.227542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 
00:37:00.817 [2024-07-13 05:26:07.227677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.227712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.227891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.227940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.228111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.228148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.228288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.228323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.228485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.228519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.228704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.228752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.228919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.228953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.229121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.229169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.229317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.229352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.229511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.229545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 
00:37:00.817 [2024-07-13 05:26:07.229722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.229756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.229904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.229939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.230124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.230158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.230310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.230344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.230537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.230570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.230728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.230776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.230955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.231003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.231147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.231181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.231346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.231381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.231576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.231609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 
00:37:00.817 [2024-07-13 05:26:07.231778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.231811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.231983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.232018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.232183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.232217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.232365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.232400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.232591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.232636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.817 [2024-07-13 05:26:07.232789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.817 [2024-07-13 05:26:07.232837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.817 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.232992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.233029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.233230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.233264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.233441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.233474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.233640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.233673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 
00:37:00.818 [2024-07-13 05:26:07.233830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.233863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.234038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.234077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.234246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.234279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.234439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.234472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.234714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.234746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.234902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.234936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.235092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.235125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.235285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.235330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.235470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.235503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.235641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.235674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 
00:37:00.818 [2024-07-13 05:26:07.235833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.235872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.236034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.236083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.236248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.236295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.236466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.236501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.236666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.236699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.236843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.236886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.237051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.237083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.237272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.237307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.237475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.237509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.237648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.237681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 
00:37:00.818 [2024-07-13 05:26:07.237843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.237883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.238089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.238137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.238302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.238337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.238473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.238506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.238647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.238681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.238854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.238894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.239088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.239123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.239285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.239318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.239462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.239495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.239634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.239666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 
00:37:00.818 [2024-07-13 05:26:07.239796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.239830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.239974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.240019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.240176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.240224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.240365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.240401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.240610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.240645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.240789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.240824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.818 qpair failed and we were unable to recover it. 00:37:00.818 [2024-07-13 05:26:07.241001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.818 [2024-07-13 05:26:07.241034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.241196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.241228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.241377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.241409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.241572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.241604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 
00:37:00.819 [2024-07-13 05:26:07.241775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.241807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.241948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.241986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.242149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.242182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.242341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.242373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.242499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.242531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.242674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.242707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.242843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.242880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.243019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.243051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.243218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.243250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.243410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.243443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 
00:37:00.819 [2024-07-13 05:26:07.243581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.243614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.243776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.243808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.243961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.243994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.244133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.244165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.244350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.244383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.244526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.244558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.244709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.244741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.244904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.244937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.245077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.245111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.245272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.245304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 
00:37:00.819 [2024-07-13 05:26:07.245437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.245470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.245630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.245664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.245818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.245888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.246100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.246136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.246270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.246304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.246478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.246512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.246680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.246716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.246882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.246917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.247064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.247099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.247265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.247298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 
00:37:00.819 [2024-07-13 05:26:07.247488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.247521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.247684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.247717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.247885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.247919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.248078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.248111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.248273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.248307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.248443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.248477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.248666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.819 [2024-07-13 05:26:07.248698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:00.819 qpair failed and we were unable to recover it. 00:37:00.819 [2024-07-13 05:26:07.248839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.820 [2024-07-13 05:26:07.248880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:00.820 qpair failed and we were unable to recover it. 00:37:00.820 [2024-07-13 05:26:07.249037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.820 [2024-07-13 05:26:07.249085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.820 qpair failed and we were unable to recover it. 00:37:00.820 [2024-07-13 05:26:07.249228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.820 [2024-07-13 05:26:07.249263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.820 qpair failed and we were unable to recover it. 
00:37:00.820 [2024-07-13 05:26:07.249427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.820 [2024-07-13 05:26:07.249460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.820 qpair failed and we were unable to recover it. 00:37:00.820 [2024-07-13 05:26:07.249627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.820 [2024-07-13 05:26:07.249667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.820 qpair failed and we were unable to recover it. 00:37:00.820 [2024-07-13 05:26:07.249840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.820 [2024-07-13 05:26:07.249881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.820 qpair failed and we were unable to recover it. 00:37:00.820 [2024-07-13 05:26:07.250024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.820 [2024-07-13 05:26:07.250057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.820 qpair failed and we were unable to recover it. 00:37:00.820 [2024-07-13 05:26:07.250217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.820 [2024-07-13 05:26:07.250249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.820 qpair failed and we were unable to recover it. 00:37:00.820 [2024-07-13 05:26:07.250401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.820 [2024-07-13 05:26:07.250434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.820 qpair failed and we were unable to recover it. 00:37:00.820 [2024-07-13 05:26:07.250568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.820 [2024-07-13 05:26:07.250602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.820 qpair failed and we were unable to recover it. 00:37:00.820 [2024-07-13 05:26:07.250755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.820 [2024-07-13 05:26:07.250788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.820 qpair failed and we were unable to recover it. 00:37:00.820 [2024-07-13 05:26:07.250932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.820 [2024-07-13 05:26:07.250967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.820 qpair failed and we were unable to recover it. 00:37:00.820 [2024-07-13 05:26:07.251131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.820 [2024-07-13 05:26:07.251164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.820 qpair failed and we were unable to recover it. 
00:37:00.820 [2024-07-13 05:26:07.251356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.820 [2024-07-13 05:26:07.251390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.820 qpair failed and we were unable to recover it. 00:37:00.820 [2024-07-13 05:26:07.251578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.820 [2024-07-13 05:26:07.251611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:00.820 qpair failed and we were unable to recover it. 00:37:01.104 [2024-07-13 05:26:07.251853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.251895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 00:37:01.104 [2024-07-13 05:26:07.252059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.252094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 00:37:01.104 [2024-07-13 05:26:07.252220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.252253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 00:37:01.104 [2024-07-13 05:26:07.252423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.252456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 00:37:01.104 [2024-07-13 05:26:07.252617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.252650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 00:37:01.104 [2024-07-13 05:26:07.252788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.252820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 00:37:01.104 [2024-07-13 05:26:07.252979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.253027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 00:37:01.104 [2024-07-13 05:26:07.253167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.253202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 
00:37:01.104 [2024-07-13 05:26:07.253378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.253411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 00:37:01.104 [2024-07-13 05:26:07.253546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.253579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 00:37:01.104 [2024-07-13 05:26:07.253769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.253801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 00:37:01.104 [2024-07-13 05:26:07.253953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.253988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 00:37:01.104 [2024-07-13 05:26:07.254158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.254192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 00:37:01.104 [2024-07-13 05:26:07.254322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.254357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 00:37:01.104 [2024-07-13 05:26:07.254511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.254544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 00:37:01.104 [2024-07-13 05:26:07.254713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.254748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 00:37:01.104 [2024-07-13 05:26:07.254939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.254988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 00:37:01.104 [2024-07-13 05:26:07.255157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.255193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 
00:37:01.104 [2024-07-13 05:26:07.255359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.255393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 00:37:01.104 [2024-07-13 05:26:07.255554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.255588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 00:37:01.104 [2024-07-13 05:26:07.255758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.255791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 00:37:01.104 [2024-07-13 05:26:07.255953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.255987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 00:37:01.104 [2024-07-13 05:26:07.256166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.256199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 00:37:01.104 [2024-07-13 05:26:07.256339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.256373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 00:37:01.104 [2024-07-13 05:26:07.256541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.256575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 00:37:01.104 [2024-07-13 05:26:07.256735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.256768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 00:37:01.104 [2024-07-13 05:26:07.256917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.256951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 00:37:01.104 [2024-07-13 05:26:07.257114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.257148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 
00:37:01.104 [2024-07-13 05:26:07.257296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.257330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 00:37:01.104 [2024-07-13 05:26:07.257503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.104 [2024-07-13 05:26:07.257542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.104 qpair failed and we were unable to recover it. 00:37:01.104 [2024-07-13 05:26:07.257683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.257717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.257857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.257897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.258066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.258100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.258238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.258271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.258436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.258470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.258657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.258691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.258842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.258882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.259021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.259055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 
00:37:01.105 [2024-07-13 05:26:07.259247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.259280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.259417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.259450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.259622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.259656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.259819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.259853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.260030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.260066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.260215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.260248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.260405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.260437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.260589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.260622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.260782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.260815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.260956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.260989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 
00:37:01.105 [2024-07-13 05:26:07.261126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.261158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.261318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.261351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.261482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.261514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.261679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.261714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.261893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.261942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.262089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.262124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.262289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.262323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.262489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.262522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.262660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.262693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.262858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.262897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 
00:37:01.105 [2024-07-13 05:26:07.263043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.263079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.263237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.263271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.263460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.263493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.263628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.263663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.263850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.263889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.264046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.264093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.264243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.264279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.264446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.264480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.264632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.264665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.264829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.264862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 
00:37:01.105 [2024-07-13 05:26:07.264998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.265031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.265189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.265228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.105 qpair failed and we were unable to recover it. 00:37:01.105 [2024-07-13 05:26:07.265408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.105 [2024-07-13 05:26:07.265441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.265575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.265609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.265781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.265817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.265986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.266020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.266182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.266216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.266349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.266383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.266564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.266597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.266760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.266794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 
00:37:01.106 [2024-07-13 05:26:07.266983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.267031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.267186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.267233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.267410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.267445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.267587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.267620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.267768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.267802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.267979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.268013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.268175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.268208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.268372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.268404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.268566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.268599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.268738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.268773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 
00:37:01.106 [2024-07-13 05:26:07.268943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.268979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.269153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.269187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.269353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.269387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.269571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.269605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.269782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.269816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.269979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.270026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.270167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.270200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.270365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.270398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.270571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.270606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.270772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.270806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 
00:37:01.106 [2024-07-13 05:26:07.270985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.271019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.271160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.271194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.271385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.271419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.271558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.271591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.271756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.271790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.271924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.271957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.272095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.272127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.272291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.272324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.272461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.272493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.272633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.272665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 
00:37:01.106 [2024-07-13 05:26:07.272852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.272892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.273028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.273065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.273223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.106 [2024-07-13 05:26:07.273256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.106 qpair failed and we were unable to recover it. 00:37:01.106 [2024-07-13 05:26:07.273398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.107 [2024-07-13 05:26:07.273430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.107 qpair failed and we were unable to recover it. 00:37:01.107 [2024-07-13 05:26:07.273566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.107 [2024-07-13 05:26:07.273598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.107 qpair failed and we were unable to recover it. 00:37:01.107 [2024-07-13 05:26:07.273760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.107 [2024-07-13 05:26:07.273793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.107 qpair failed and we were unable to recover it. 00:37:01.107 [2024-07-13 05:26:07.273956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.107 [2024-07-13 05:26:07.273989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.107 qpair failed and we were unable to recover it. 00:37:01.107 [2024-07-13 05:26:07.274125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.107 [2024-07-13 05:26:07.274158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.107 qpair failed and we were unable to recover it. 00:37:01.107 [2024-07-13 05:26:07.274320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.107 [2024-07-13 05:26:07.274353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.107 qpair failed and we were unable to recover it. 00:37:01.107 [2024-07-13 05:26:07.274517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.107 [2024-07-13 05:26:07.274550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.107 qpair failed and we were unable to recover it. 
00:37:01.107 [2024-07-13 05:26:07.274735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.107 [2024-07-13 05:26:07.274767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:01.107 qpair failed and we were unable to recover it.
00:37:01.107 [... the same connect() failed (errno = 111) / sock connection error pair repeats 29 more times between 05:26:07.274924 and 05:26:07.280516, cycling through tqpair=0x6150001f2780, 0x615000210000, and 0x6150001ffe80; each attempt ends with "qpair failed and we were unable to recover it." ...]
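errno = 111 on Linux is ECONNREFUSED: each TCP connect() to 10.0.0.2:4420 is answered with a reset because nothing is accepting on that port at the moment of the attempt. A minimal sketch for reproducing the same condition from a shell (a hypothetical probe, not part of the autotest scripts; assumes netcat is installed on the test host):

# exits non-zero when the connection is refused (ECONNREFUSED / errno 111)
# or times out after 1 second
nc -z -w 1 10.0.0.2 4420 || echo 'connect to 10.0.0.2:4420 refused or timed out'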
00:37:01.107 [... the connect() failed (errno = 111) / sock connection error pair repeats 8 times between 05:26:07.280655 and 05:26:07.282118 for tqpair=0x615000210000, 0x6150001ffe80, and 0x6150001f2780, each ending with "qpair failed and we were unable to recover it." ...]
00:37:01.108 [2024-07-13 05:26:07.282213] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:37:01.108 [2024-07-13 05:26:07.282260] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:37:01.108 [2024-07-13 05:26:07.282285] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:37:01.108 [2024-07-13 05:26:07.282304] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:37:01.108 [2024-07-13 05:26:07.282324] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:37:01.108 [2024-07-13 05:26:07.282417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:37:01.108 [2024-07-13 05:26:07.282574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:37:01.108 [2024-07-13 05:26:07.282606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:37:01.108 [2024-07-13 05:26:07.282610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:37:01.108 [... interleaved with the notices above, the connect() failed (errno = 111) / sock connection error pair repeats 8 more times between 05:26:07.282305 and 05:26:07.283643 for tqpair=0x6150001f2780 and 0x615000210000, each ending with "qpair failed and we were unable to recover it." ...]
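The app_setup_trace notices give the capture recipe directly; a minimal sketch of acting on them from the test host follows (the /tmp destination is an assumption for illustration, any writable path works):

# snapshot the nvmf app's tracepoints at runtime, exactly as app.c suggests
spdk_trace -s nvmf -i 0
# or keep the shared-memory trace file for offline analysis/debug
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0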
00:37:01.108 [... the connect() failed (errno = 111) / sock connection error pair repeats 160 more times between 05:26:07.283783 and 05:26:07.314256, cycling through tqpair=0x6150001f2780, 0x615000210000, 0x6150001ffe80, and 0x61500021ff00 (all with addr=10.0.0.2, port=4420); every attempt ends with "qpair failed and we were unable to recover it." ...]
00:37:01.112 [2024-07-13 05:26:07.314389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.112 [2024-07-13 05:26:07.314421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.112 qpair failed and we were unable to recover it. 00:37:01.112 [2024-07-13 05:26:07.314579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.112 [2024-07-13 05:26:07.314612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.112 qpair failed and we were unable to recover it. 00:37:01.112 [2024-07-13 05:26:07.314741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.112 [2024-07-13 05:26:07.314773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.112 qpair failed and we were unable to recover it. 00:37:01.112 [2024-07-13 05:26:07.314918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.112 [2024-07-13 05:26:07.314954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.112 qpair failed and we were unable to recover it. 00:37:01.112 [2024-07-13 05:26:07.315098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.112 [2024-07-13 05:26:07.315132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.112 qpair failed and we were unable to recover it. 00:37:01.112 [2024-07-13 05:26:07.315317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.112 [2024-07-13 05:26:07.315351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.112 qpair failed and we were unable to recover it. 00:37:01.112 [2024-07-13 05:26:07.315488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.112 [2024-07-13 05:26:07.315523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.112 qpair failed and we were unable to recover it. 00:37:01.112 [2024-07-13 05:26:07.315666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.112 [2024-07-13 05:26:07.315700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.112 qpair failed and we were unable to recover it. 00:37:01.112 [2024-07-13 05:26:07.315863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.112 [2024-07-13 05:26:07.315903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.112 qpair failed and we were unable to recover it. 00:37:01.112 [2024-07-13 05:26:07.316052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.112 [2024-07-13 05:26:07.316087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.112 qpair failed and we were unable to recover it. 
00:37:01.112 [2024-07-13 05:26:07.316221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.316254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.113 [2024-07-13 05:26:07.316389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.316422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.113 [2024-07-13 05:26:07.316555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.316588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.113 [2024-07-13 05:26:07.316736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.316769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.113 [2024-07-13 05:26:07.316926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.316974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.113 [2024-07-13 05:26:07.317170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.317204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.113 [2024-07-13 05:26:07.317370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.317404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.113 [2024-07-13 05:26:07.317568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.317602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.113 [2024-07-13 05:26:07.317748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.317782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.113 [2024-07-13 05:26:07.317922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.317956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 
00:37:01.113 [2024-07-13 05:26:07.318146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.318180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.113 [2024-07-13 05:26:07.318333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.318366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.113 [2024-07-13 05:26:07.318524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.318561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.113 [2024-07-13 05:26:07.318745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.318777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.113 [2024-07-13 05:26:07.318916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.318949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.113 [2024-07-13 05:26:07.319115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.319147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.113 [2024-07-13 05:26:07.319284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.319316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.113 [2024-07-13 05:26:07.319483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.319515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.113 [2024-07-13 05:26:07.319664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.319697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.113 [2024-07-13 05:26:07.319832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.319883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 
00:37:01.113 [2024-07-13 05:26:07.320034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.320082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.113 [2024-07-13 05:26:07.320259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.320294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.113 [2024-07-13 05:26:07.320426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.320459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.113 [2024-07-13 05:26:07.320620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.320653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.113 [2024-07-13 05:26:07.320816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.320874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.113 [2024-07-13 05:26:07.321048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.321084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.113 [2024-07-13 05:26:07.321224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.321257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.113 [2024-07-13 05:26:07.321395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.321430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.113 [2024-07-13 05:26:07.321595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.321629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.113 [2024-07-13 05:26:07.321793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.321827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 
00:37:01.113 [2024-07-13 05:26:07.321985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.322019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.113 [2024-07-13 05:26:07.322154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.113 [2024-07-13 05:26:07.322188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.113 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.322347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.322381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.322520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.322556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.322688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.322722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.322911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.322960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.323105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.323141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.323283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.323316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.323472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.323504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.323641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.323677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 
00:37:01.114 [2024-07-13 05:26:07.323839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.323956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.324095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.324130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.324272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.324305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.324435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.324468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.324608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.324643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.324793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.324840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.325011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.325060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.325214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.325248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.325409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.325443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.325619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.325652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 
00:37:01.114 [2024-07-13 05:26:07.325807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.325840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.325995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.326043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.326227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.326280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.326442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.326477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.326620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.326653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.326804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.326836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.326990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.327038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.327181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.327217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.327347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.327381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.327531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.327565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 
00:37:01.114 [2024-07-13 05:26:07.327731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.327765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.327900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.327935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.328078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.328113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.328281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.328316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.328448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.328480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.328635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.328668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.328813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.328849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.328991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.329024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.329169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.329204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.329370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.329404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 
00:37:01.114 [2024-07-13 05:26:07.329569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.329618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.329776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.329813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.114 qpair failed and we were unable to recover it. 00:37:01.114 [2024-07-13 05:26:07.329956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.114 [2024-07-13 05:26:07.329991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.330138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.330171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.330307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.330338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.330481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.330513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.330663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.330698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.330864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.330904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.331058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.331093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.331265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.331299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 
00:37:01.115 [2024-07-13 05:26:07.331440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.331474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.331616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.331650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.331826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.331861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.332025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.332072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.332228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.332263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.332404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.332438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.332574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.332608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.332769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.332802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.332974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.333010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.333154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.333189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 
00:37:01.115 [2024-07-13 05:26:07.333326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.333359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.333495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.333528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.333696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.333728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.333896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.333930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.334068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.334104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.334268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.334302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.334465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.334498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.334640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.334675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.334805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.334839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.335000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.335047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 
00:37:01.115 [2024-07-13 05:26:07.335193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.335227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.335364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.335398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.335544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.335576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.335713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.335746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.335918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.335967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.336124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.336159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.336358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.336392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.336529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.336563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.336733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.336768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.336933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.336968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 
00:37:01.115 [2024-07-13 05:26:07.337105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.337139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.115 [2024-07-13 05:26:07.337286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.115 [2024-07-13 05:26:07.337321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.115 qpair failed and we were unable to recover it. 00:37:01.116 [2024-07-13 05:26:07.337478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.116 [2024-07-13 05:26:07.337511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.116 qpair failed and we were unable to recover it. 00:37:01.116 [2024-07-13 05:26:07.337672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.116 [2024-07-13 05:26:07.337705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.116 qpair failed and we were unable to recover it. 00:37:01.116 [2024-07-13 05:26:07.337874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.116 [2024-07-13 05:26:07.337923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.116 qpair failed and we were unable to recover it. 00:37:01.116 [2024-07-13 05:26:07.338076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.116 [2024-07-13 05:26:07.338124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.116 qpair failed and we were unable to recover it. 00:37:01.116 [2024-07-13 05:26:07.338320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.116 [2024-07-13 05:26:07.338355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.116 qpair failed and we were unable to recover it. 00:37:01.116 [2024-07-13 05:26:07.338506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.116 [2024-07-13 05:26:07.338539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.116 qpair failed and we were unable to recover it. 00:37:01.116 [2024-07-13 05:26:07.338707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.116 [2024-07-13 05:26:07.338742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.116 qpair failed and we were unable to recover it. 00:37:01.116 [2024-07-13 05:26:07.338894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.116 [2024-07-13 05:26:07.338947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.116 qpair failed and we were unable to recover it. 
00:37:01.116 [2024-07-13 05:26:07.339088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:37:01.116 [2024-07-13 05:26:07.339122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 
00:37:01.116 qpair failed and we were unable to recover it. 
[... the three lines above repeat for every reconnect attempt from 05:26:07.339088 through 05:26:07.378968: each connect() fails with errno = 111 against addr=10.0.0.2, port=4420, cycling over tqpairs 0x615000210000, 0x6150001ffe80, 0x61500021ff00, and 0x6150001f2780, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:37:01.121 [2024-07-13 05:26:07.379138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.121 [2024-07-13 05:26:07.379172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.121 qpair failed and we were unable to recover it. 00:37:01.121 [2024-07-13 05:26:07.379311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.121 [2024-07-13 05:26:07.379345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.121 qpair failed and we were unable to recover it. 00:37:01.121 [2024-07-13 05:26:07.379478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.121 [2024-07-13 05:26:07.379513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.121 qpair failed and we were unable to recover it. 00:37:01.121 [2024-07-13 05:26:07.379673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.121 [2024-07-13 05:26:07.379708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.121 qpair failed and we were unable to recover it. 00:37:01.121 [2024-07-13 05:26:07.379859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.121 [2024-07-13 05:26:07.379923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.121 qpair failed and we were unable to recover it. 00:37:01.121 [2024-07-13 05:26:07.380102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.121 [2024-07-13 05:26:07.380151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.121 qpair failed and we were unable to recover it. 00:37:01.121 [2024-07-13 05:26:07.380306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.121 [2024-07-13 05:26:07.380343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.121 qpair failed and we were unable to recover it. 00:37:01.121 [2024-07-13 05:26:07.380479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.121 [2024-07-13 05:26:07.380513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.380681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.380715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.380854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.380894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 
00:37:01.122 [2024-07-13 05:26:07.381057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.381106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.381281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.381316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.381469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.381502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.381640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.381673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.381814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.381850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.381997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.382030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.382171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.382204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.382363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.382396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.382531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.382564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.382706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.382742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 
00:37:01.122 [2024-07-13 05:26:07.382898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.382946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.383099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.383133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.383301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.383334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.383470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.383503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.383660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.383709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.383898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.383946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.384133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.384180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.384330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.384366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.384512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.384547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.384679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.384718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 
00:37:01.122 [2024-07-13 05:26:07.384880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.384928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.385080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.385116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.385266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.385304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.385444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.385477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.385632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.385681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.385844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.385900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.386055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.386090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.386224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.386258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.386393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.386426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.122 [2024-07-13 05:26:07.386589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.386623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 
00:37:01.122 [2024-07-13 05:26:07.386759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.122 [2024-07-13 05:26:07.386795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.122 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.386951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.386986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.387129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.387181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.387322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.387356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.387524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.387556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.387703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.387736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.387877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.387912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.388051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.388084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.388214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.388247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.388384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.388416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 
00:37:01.123 [2024-07-13 05:26:07.388575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.388607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.388739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.388773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.388912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.388945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.389156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.389189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.389319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.389351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.389489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.389522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.389676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.389725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.389946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.389995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.390170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.390219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.390399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.390433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 
00:37:01.123 [2024-07-13 05:26:07.390585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.390618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.390762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.390795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.390940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.390973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.391134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.391167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.391306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.391339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.391472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.391505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.391638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.391670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.391852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.391905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.392058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.392094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.392244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.392284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 
00:37:01.123 [2024-07-13 05:26:07.392429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.392462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.392594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.392627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.123 [2024-07-13 05:26:07.392878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.123 [2024-07-13 05:26:07.392912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.123 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.393057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.393090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.393222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.393254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.393397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.393430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.393589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.393621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.393770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.393802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.393972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.394020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.394169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.394204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 
00:37:01.124 [2024-07-13 05:26:07.394342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.394375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.394623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.394656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.394787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.394820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.394978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.395011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.395147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.395181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.395338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.395370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.395503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.395536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.395701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.395733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.395872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.395905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.396053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.396086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 
00:37:01.124 [2024-07-13 05:26:07.396253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.396288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.396451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.396484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.396615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.396648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.396828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.396860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.397017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.397066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.397226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.397275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.397445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.397479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.397618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.397651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.397781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.397813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.397951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.397984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 
00:37:01.124 [2024-07-13 05:26:07.398139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.398171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.398297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.398330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.398473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.398506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.398658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.398691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.398832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.398871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.124 [2024-07-13 05:26:07.399000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.124 [2024-07-13 05:26:07.399033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.124 qpair failed and we were unable to recover it. 00:37:01.125 [2024-07-13 05:26:07.399199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.125 [2024-07-13 05:26:07.399232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.125 qpair failed and we were unable to recover it. 00:37:01.125 [2024-07-13 05:26:07.399363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.125 [2024-07-13 05:26:07.399396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.125 qpair failed and we were unable to recover it. 00:37:01.125 [2024-07-13 05:26:07.399555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.125 [2024-07-13 05:26:07.399588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.125 qpair failed and we were unable to recover it. 00:37:01.125 [2024-07-13 05:26:07.399722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.125 [2024-07-13 05:26:07.399763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.125 qpair failed and we were unable to recover it. 
00:37:01.125 [2024-07-13 05:26:07.399910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.125 [2024-07-13 05:26:07.399943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.125 qpair failed and we were unable to recover it. 00:37:01.125 [2024-07-13 05:26:07.400071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.125 [2024-07-13 05:26:07.400104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.125 qpair failed and we were unable to recover it. 00:37:01.125 [2024-07-13 05:26:07.400264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.125 [2024-07-13 05:26:07.400297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.125 qpair failed and we were unable to recover it. 00:37:01.125 [2024-07-13 05:26:07.400433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.125 [2024-07-13 05:26:07.400465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.125 qpair failed and we were unable to recover it. 00:37:01.125 [2024-07-13 05:26:07.400624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.125 [2024-07-13 05:26:07.400656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.125 qpair failed and we were unable to recover it. 00:37:01.125 [2024-07-13 05:26:07.400788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.125 [2024-07-13 05:26:07.400820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.125 qpair failed and we were unable to recover it. 00:37:01.125 [2024-07-13 05:26:07.400953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.125 [2024-07-13 05:26:07.400986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.125 qpair failed and we were unable to recover it. 00:37:01.125 [2024-07-13 05:26:07.401117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.125 [2024-07-13 05:26:07.401150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.125 qpair failed and we were unable to recover it. 00:37:01.125 [2024-07-13 05:26:07.401320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.125 [2024-07-13 05:26:07.401353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.125 qpair failed and we were unable to recover it. 00:37:01.125 [2024-07-13 05:26:07.401501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.125 [2024-07-13 05:26:07.401534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.125 qpair failed and we were unable to recover it. 
00:37:01.125 [2024-07-13 05:26:07.401663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.125 [2024-07-13 05:26:07.401695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.125 qpair failed and we were unable to recover it. 00:37:01.125 [2024-07-13 05:26:07.401840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.125 [2024-07-13 05:26:07.401880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.125 qpair failed and we were unable to recover it. 00:37:01.125 [2024-07-13 05:26:07.402010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.125 [2024-07-13 05:26:07.402043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.125 qpair failed and we were unable to recover it. 00:37:01.125 [2024-07-13 05:26:07.402214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.125 [2024-07-13 05:26:07.402247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.125 qpair failed and we were unable to recover it. 00:37:01.125 [2024-07-13 05:26:07.402390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.125 [2024-07-13 05:26:07.402423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.125 qpair failed and we were unable to recover it. 00:37:01.125 [2024-07-13 05:26:07.402564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.125 [2024-07-13 05:26:07.402613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.125 qpair failed and we were unable to recover it. 00:37:01.125 [2024-07-13 05:26:07.402787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.125 [2024-07-13 05:26:07.402823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.125 qpair failed and we were unable to recover it. 00:37:01.125 [2024-07-13 05:26:07.403018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.125 [2024-07-13 05:26:07.403053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.125 qpair failed and we were unable to recover it. 00:37:01.125 [2024-07-13 05:26:07.403189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.125 [2024-07-13 05:26:07.403222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.125 qpair failed and we were unable to recover it. 00:37:01.125 [2024-07-13 05:26:07.403392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.125 [2024-07-13 05:26:07.403426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.125 qpair failed and we were unable to recover it. 
00:37:01.125 [2024-07-13 05:26:07.403579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.125 [2024-07-13 05:26:07.403612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:01.125 qpair failed and we were unable to recover it.
00:37:01.125 [... the same three-line sequence (posix.c:1038:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 05:26:07.403 through 05:26:07.443, every attempt targeting addr=10.0.0.2, port=4420 across tqpairs 0x61500021ff00, 0x6150001f2780, 0x615000210000 and 0x6150001ffe80 ...]
00:37:01.131 [2024-07-13 05:26:07.443129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.131 [2024-07-13 05:26:07.443162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.131 qpair failed and we were unable to recover it. 00:37:01.131 [2024-07-13 05:26:07.443311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.131 [2024-07-13 05:26:07.443344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.131 qpair failed and we were unable to recover it. 00:37:01.131 [2024-07-13 05:26:07.443514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.131 [2024-07-13 05:26:07.443547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.131 qpair failed and we were unable to recover it. 00:37:01.131 [2024-07-13 05:26:07.443722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.131 [2024-07-13 05:26:07.443755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.131 qpair failed and we were unable to recover it. 00:37:01.131 [2024-07-13 05:26:07.443898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.131 [2024-07-13 05:26:07.443931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.131 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.444069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.444101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.444259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.444292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.444418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.444450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.444598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.444646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.444814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.444850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 
00:37:01.132 [2024-07-13 05:26:07.445007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.445042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.445208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.445241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.445385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.445418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.445570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.445603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.445764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.445796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.445951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.445984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.446147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.446180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.446316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.446348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.446472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.446505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.446665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.446698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 
00:37:01.132 [2024-07-13 05:26:07.446828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.446881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.447051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.447083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.447244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.447276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.447434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.447467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.447630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.447663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.447798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.447831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.447962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.447995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.448139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.448171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.448320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.448353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.448510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.448542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 
00:37:01.132 [2024-07-13 05:26:07.448700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.448749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.448908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.448956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.449095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.449130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.449272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.449306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.449467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.449500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.449640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.449675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.449804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.449843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.450030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.450079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.450267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.450305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.450467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.450501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 
00:37:01.132 [2024-07-13 05:26:07.450637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.450671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.450819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.450874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.451042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.451089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.451263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.132 [2024-07-13 05:26:07.451299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.132 qpair failed and we were unable to recover it. 00:37:01.132 [2024-07-13 05:26:07.451473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.451507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.451642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.451675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.451834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.451879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.452012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.452047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.452217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.452250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.455883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.455924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 
00:37:01.133 [2024-07-13 05:26:07.456077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.456112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.456298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.456332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.456482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.456516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.456656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.456691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.456844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.456886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.457074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.457107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.457260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.457294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.457421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.457455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.457640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.457673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.457835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.457875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 
00:37:01.133 [2024-07-13 05:26:07.458034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.458067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.458207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.458240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.458388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.458422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.458612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.458645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.458797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.458831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.458981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.459015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.459146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.459178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.459341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.459377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.459542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.459576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.459727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.459761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 
00:37:01.133 [2024-07-13 05:26:07.459904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.459938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.460072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.460107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.460294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.460328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.460460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.460493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.460679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.460712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.460870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.460904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.461046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.461084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.461221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.461254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.461412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.461446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.461575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.461609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 
00:37:01.133 [2024-07-13 05:26:07.461755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.461789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.461945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.461990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.462133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.462167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.462320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.462354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.133 [2024-07-13 05:26:07.462522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.133 [2024-07-13 05:26:07.462555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.133 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.462706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.462739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.462881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.462915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.463049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.463082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.463236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.463284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.463466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.463500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 
00:37:01.134 [2024-07-13 05:26:07.463672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.463720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.463873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.463910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.464047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.464082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.464254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.464288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.464435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.464468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.464608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.464642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.464776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.464810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.464958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.464992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.465175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.465222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.465478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.465514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 
00:37:01.134 [2024-07-13 05:26:07.465659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.465693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.465856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.465895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.466067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.466100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.466239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.466272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.466438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.466471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.466626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.466658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.466814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.466862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.467029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.467068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.467264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.467299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.467436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.467469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 
00:37:01.134 [2024-07-13 05:26:07.467609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.467643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.467790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.467825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.467977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.468011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.468169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.468203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.468338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.468372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.468539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.468573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.468710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.468748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.468912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.468946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.469091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.469125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.469267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.469301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 
00:37:01.134 [2024-07-13 05:26:07.469481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.469515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.469693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.469726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.469882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.469917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.470069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.470117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.470281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.134 [2024-07-13 05:26:07.470315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.134 qpair failed and we were unable to recover it. 00:37:01.134 [2024-07-13 05:26:07.470464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.135 [2024-07-13 05:26:07.470497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.135 qpair failed and we were unable to recover it. 00:37:01.135 [2024-07-13 05:26:07.470649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.135 [2024-07-13 05:26:07.470682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.135 qpair failed and we were unable to recover it. 00:37:01.135 [2024-07-13 05:26:07.470815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.135 [2024-07-13 05:26:07.470848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.135 qpair failed and we were unable to recover it. 00:37:01.135 [2024-07-13 05:26:07.471011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.135 [2024-07-13 05:26:07.471044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.135 qpair failed and we were unable to recover it. 00:37:01.135 [2024-07-13 05:26:07.471204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.135 [2024-07-13 05:26:07.471237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.135 qpair failed and we were unable to recover it. 
00:37:01.135 [2024-07-13 05:26:07.471388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.135 [2024-07-13 05:26:07.471420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:01.135 qpair failed and we were unable to recover it.
00:37:01.135 [... the connect()/qpair-failure pair above repeats back-to-back from 05:26:07.471 through 05:26:07.516 for tqpair handles 0x6150001f2780, 0x6150001ffe80, 0x615000210000, and 0x61500021ff00; every attempt targets addr=10.0.0.2, port=4420, fails with errno = 111, and ends with "qpair failed and we were unable to recover it." ...]
00:37:01.140 [2024-07-13 05:26:07.516406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.140 [2024-07-13 05:26:07.516439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:01.140 qpair failed and we were unable to recover it.
00:37:01.140 [2024-07-13 05:26:07.516572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.140 [2024-07-13 05:26:07.516604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.140 qpair failed and we were unable to recover it. 00:37:01.140 [2024-07-13 05:26:07.516738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.140 [2024-07-13 05:26:07.516770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.140 qpair failed and we were unable to recover it. 00:37:01.140 [2024-07-13 05:26:07.516918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.140 [2024-07-13 05:26:07.516951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.140 qpair failed and we were unable to recover it. 00:37:01.140 [2024-07-13 05:26:07.517118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.140 [2024-07-13 05:26:07.517168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.140 qpair failed and we were unable to recover it. 00:37:01.140 [2024-07-13 05:26:07.517330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.140 [2024-07-13 05:26:07.517365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.140 qpair failed and we were unable to recover it. 00:37:01.140 [2024-07-13 05:26:07.517506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.140 [2024-07-13 05:26:07.517540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.140 qpair failed and we were unable to recover it. 00:37:01.140 [2024-07-13 05:26:07.517671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.140 [2024-07-13 05:26:07.517705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.140 qpair failed and we were unable to recover it. 00:37:01.140 [2024-07-13 05:26:07.517938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.140 [2024-07-13 05:26:07.517972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.140 qpair failed and we were unable to recover it. 00:37:01.140 [2024-07-13 05:26:07.518114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.140 [2024-07-13 05:26:07.518147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.140 qpair failed and we were unable to recover it. 00:37:01.140 [2024-07-13 05:26:07.518313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.140 [2024-07-13 05:26:07.518346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.140 qpair failed and we were unable to recover it. 
00:37:01.140 [2024-07-13 05:26:07.518482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.140 [2024-07-13 05:26:07.518515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.140 qpair failed and we were unable to recover it. 00:37:01.140 [2024-07-13 05:26:07.518669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.140 [2024-07-13 05:26:07.518703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.140 qpair failed and we were unable to recover it. 00:37:01.140 [2024-07-13 05:26:07.518869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.140 [2024-07-13 05:26:07.518904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.140 qpair failed and we were unable to recover it. 00:37:01.140 [2024-07-13 05:26:07.519066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.140 [2024-07-13 05:26:07.519099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.140 qpair failed and we were unable to recover it. 00:37:01.140 [2024-07-13 05:26:07.519239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.140 [2024-07-13 05:26:07.519272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.140 qpair failed and we were unable to recover it. 00:37:01.140 [2024-07-13 05:26:07.519410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.140 [2024-07-13 05:26:07.519443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.140 qpair failed and we were unable to recover it. 00:37:01.140 [2024-07-13 05:26:07.519600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.140 [2024-07-13 05:26:07.519638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.140 qpair failed and we were unable to recover it. 00:37:01.140 [2024-07-13 05:26:07.519785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.140 [2024-07-13 05:26:07.519817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.519969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.520009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.520150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.520183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 
00:37:01.141 [2024-07-13 05:26:07.520343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.520376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.520510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.520543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.520683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.520715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.520883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.520929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.521098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.521131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.521307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.521342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.521482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.521515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.521679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.521712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.521888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.521926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.522076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.522109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 
00:37:01.141 [2024-07-13 05:26:07.522279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.522312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.522450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.522483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.522646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.522679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.522814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.522847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.523043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.523075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.523212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.523245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.523382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.523423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.523571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.523604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.523736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.523769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.523976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.524009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 
00:37:01.141 [2024-07-13 05:26:07.524158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.524191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.524335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.524369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.524529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.524561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.524733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.524781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.524939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.524975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.525117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.525157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.525318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.525355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.525514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.525547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.525684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.525717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.525878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.525913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 
00:37:01.141 [2024-07-13 05:26:07.526051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.526085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.526256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.526290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.526452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.526497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.526646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.526680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.526829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.526862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.527015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.527050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.527189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.527227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.527395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.141 [2024-07-13 05:26:07.527429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.141 qpair failed and we were unable to recover it. 00:37:01.141 [2024-07-13 05:26:07.527587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.527620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.527782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.527815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 
00:37:01.142 [2024-07-13 05:26:07.527970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.528005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.528145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.528179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.528371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.528404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.528567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.528600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.528739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.528773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.528916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.528951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.529090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.529123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.529247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.529280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.529482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.529516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.529648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.529681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 
00:37:01.142 [2024-07-13 05:26:07.529933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.529967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.530106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.530139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.530320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.530354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.530490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.530523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.530660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.530693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.530823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.530856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.531007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.531040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.531182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.531215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.531402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.531435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.531568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.531601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 
00:37:01.142 [2024-07-13 05:26:07.531763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.531796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.531930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.531964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.532094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.532128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.532290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.532323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.532473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.532507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.532668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.532700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.532853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.532894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.533041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.533074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.533214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.533246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.533462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.533495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 
00:37:01.142 [2024-07-13 05:26:07.533657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.533691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.533823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.533855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.533996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.534029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.534175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.534209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.534339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.534372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.534531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.534564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.534699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.534732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.534882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.534916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.142 [2024-07-13 05:26:07.535067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.142 [2024-07-13 05:26:07.535100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.142 qpair failed and we were unable to recover it. 00:37:01.143 [2024-07-13 05:26:07.535257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.535291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 
00:37:01.143 [2024-07-13 05:26:07.535423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.535456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 00:37:01.143 [2024-07-13 05:26:07.535628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.535662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 00:37:01.143 [2024-07-13 05:26:07.535795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.535828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 00:37:01.143 [2024-07-13 05:26:07.535986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.536020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 00:37:01.143 [2024-07-13 05:26:07.536174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.536207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 00:37:01.143 [2024-07-13 05:26:07.536346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.536380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 00:37:01.143 [2024-07-13 05:26:07.536536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.536568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 00:37:01.143 [2024-07-13 05:26:07.536692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.536725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 00:37:01.143 [2024-07-13 05:26:07.536879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.536912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 00:37:01.143 [2024-07-13 05:26:07.537056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.537089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 
00:37:01.143 [2024-07-13 05:26:07.537232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.537265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 00:37:01.143 [2024-07-13 05:26:07.537505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.537538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 00:37:01.143 [2024-07-13 05:26:07.537675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.537708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 00:37:01.143 [2024-07-13 05:26:07.537874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.537908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 00:37:01.143 [2024-07-13 05:26:07.538049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.538083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 00:37:01.143 [2024-07-13 05:26:07.538269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.538303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 00:37:01.143 [2024-07-13 05:26:07.538469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.538514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 00:37:01.143 [2024-07-13 05:26:07.538665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.538698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 00:37:01.143 [2024-07-13 05:26:07.538828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.538861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 00:37:01.143 [2024-07-13 05:26:07.539009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.539042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 
00:37:01.143 [2024-07-13 05:26:07.539172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.539204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 00:37:01.143 [2024-07-13 05:26:07.539356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.539389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 00:37:01.143 [2024-07-13 05:26:07.539517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.539550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 00:37:01.143 [2024-07-13 05:26:07.539677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.539714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 00:37:01.143 [2024-07-13 05:26:07.539851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.539892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 00:37:01.143 [2024-07-13 05:26:07.540033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.540066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 00:37:01.143 [2024-07-13 05:26:07.540196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.540229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 00:37:01.143 [2024-07-13 05:26:07.540391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.540424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 00:37:01.143 [2024-07-13 05:26:07.540555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.540588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 00:37:01.143 [2024-07-13 05:26:07.540715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.143 [2024-07-13 05:26:07.540748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.143 qpair failed and we were unable to recover it. 
00:37:01.143 [2024-07-13 05:26:07.540917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.143 [2024-07-13 05:26:07.540951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:01.143 qpair failed and we were unable to recover it.
00:37:01.143 [2024-07-13 05:26:07.541078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.143 [2024-07-13 05:26:07.541111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:01.143 qpair failed and we were unable to recover it.
[... the same three-line connect() failure for tqpair=0x6150001ffe80 repeats roughly 200 more times, timestamps 2024-07-13 05:26:07.541242 through 05:26:07.577695 ...]
00:37:01.148 [2024-07-13 05:26:07.577826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.148 [2024-07-13 05:26:07.577859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.148 qpair failed and we were unable to recover it. 00:37:01.148 [2024-07-13 05:26:07.578024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.148 [2024-07-13 05:26:07.578057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.148 qpair failed and we were unable to recover it. 00:37:01.148 [2024-07-13 05:26:07.578215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.148 [2024-07-13 05:26:07.578249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.148 qpair failed and we were unable to recover it. 00:37:01.148 [2024-07-13 05:26:07.578411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.148 [2024-07-13 05:26:07.578443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.148 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.578572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.578607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.578757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.578794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 A controller has encountered a failure and is being reset. 00:37:01.430 [2024-07-13 05:26:07.579012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.579068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.579217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.579260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.579446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.579481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.579630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.579665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 
00:37:01.430 [2024-07-13 05:26:07.579817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.579851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.580000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.580034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.580202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.580236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.580368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.580402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.580562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.580595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.580723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.580757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.580946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.580980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.581122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.581156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.581299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.581332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.581478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.581512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 
00:37:01.430 [2024-07-13 05:26:07.581653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.581687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.581825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.581858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.582009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.582042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.582180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.582214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.582385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.582419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.582557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.582592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.582722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.582755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.582893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.582927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.583068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.583102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.583234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.583267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 
00:37:01.430 [2024-07-13 05:26:07.583403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.583436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.583565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.583599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.583782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.583816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.583966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.584000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.584134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.584168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.584328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.584362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.584525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.584559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.584711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.584745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.430 [2024-07-13 05:26:07.584889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.430 [2024-07-13 05:26:07.584923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.430 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.585055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.585088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 
00:37:01.431 [2024-07-13 05:26:07.585247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.585280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.585469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.585503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.585626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.585660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.585834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.585872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.586006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.586039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.586173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.586206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.586381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.586419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.586547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.586580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.586751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.586801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.586972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.587010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 
00:37:01.431 [2024-07-13 05:26:07.587149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.587184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.587313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.587346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.587489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.587523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.587652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.587684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.587818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.587851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.587993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.588026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.588193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.588226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.588363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.588396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.588538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.588571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.588704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.588738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 
00:37:01.431 [2024-07-13 05:26:07.588940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.588975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.589116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.589150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.589315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.589350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.589485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.589518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.589679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.589712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.589880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.589914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.590062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.590095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.590223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.590256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.590422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.590457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.590596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.590629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 
00:37:01.431 [2024-07-13 05:26:07.590762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.590795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.590932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.590966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.591117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.591150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.591290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.591328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.591457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.591491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.591660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.591693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.591854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.591893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.592050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.592083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.592215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.431 [2024-07-13 05:26:07.592247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.431 qpair failed and we were unable to recover it. 00:37:01.431 [2024-07-13 05:26:07.592381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.592427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 
00:37:01.432 [2024-07-13 05:26:07.592564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.592596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.592724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.592757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.592889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.592923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.593058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.593092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.593227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.593260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.593420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.593453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.593585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.593617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.593779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.593812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.593971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.594004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.594135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.594168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 
00:37:01.432 [2024-07-13 05:26:07.594309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.594342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.594501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.594533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.594658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.594691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.594828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.594861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.595002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.595034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.595159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.595191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.595352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.595385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.595531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.595564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.595699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.595733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.595890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.595923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 
00:37:01.432 [2024-07-13 05:26:07.596057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.596090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.596238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.596271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.596415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.596447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.596577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.596609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.596757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.596789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.596956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.596989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.597127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.597159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.597314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.597347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.597497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.597529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.597682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.597714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 
00:37:01.432 [2024-07-13 05:26:07.597842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.597881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.598018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.598050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.598191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.598224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.598373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.598410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.598555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.598588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.598741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.598774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.598927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.598975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.599148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.599185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.599326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.599360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 00:37:01.432 [2024-07-13 05:26:07.599522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.432 [2024-07-13 05:26:07.599555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.432 qpair failed and we were unable to recover it. 
00:37:01.433 [2024-07-13 05:26:07.599744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.433 [2024-07-13 05:26:07.599779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.433 qpair failed and we were unable to recover it. 00:37:01.433 [2024-07-13 05:26:07.599914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.433 [2024-07-13 05:26:07.599978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.433 qpair failed and we were unable to recover it. 00:37:01.433 [2024-07-13 05:26:07.600136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.433 [2024-07-13 05:26:07.600170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.433 qpair failed and we were unable to recover it. 00:37:01.433 [2024-07-13 05:26:07.600333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.433 [2024-07-13 05:26:07.600367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:01.433 qpair failed and we were unable to recover it. 00:37:01.433 [2024-07-13 05:26:07.600530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.433 [2024-07-13 05:26:07.600565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.433 qpair failed and we were unable to recover it. 00:37:01.433 [2024-07-13 05:26:07.600703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.433 [2024-07-13 05:26:07.600736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.433 qpair failed and we were unable to recover it. 00:37:01.433 [2024-07-13 05:26:07.600878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.433 [2024-07-13 05:26:07.600911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.433 qpair failed and we were unable to recover it. 00:37:01.433 [2024-07-13 05:26:07.601053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.433 [2024-07-13 05:26:07.601087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.433 qpair failed and we were unable to recover it. 00:37:01.433 [2024-07-13 05:26:07.601214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.433 [2024-07-13 05:26:07.601246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.433 qpair failed and we were unable to recover it. 00:37:01.433 [2024-07-13 05:26:07.601394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.433 [2024-07-13 05:26:07.601427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.433 qpair failed and we were unable to recover it. 
00:37:01.433 [2024-07-13 05:26:07.601558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.433 [2024-07-13 05:26:07.601591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.433 qpair failed and we were unable to recover it. 00:37:01.433 [2024-07-13 05:26:07.601752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.433 [2024-07-13 05:26:07.601785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.433 qpair failed and we were unable to recover it. 00:37:01.433 [2024-07-13 05:26:07.601920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.433 [2024-07-13 05:26:07.601953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.433 qpair failed and we were unable to recover it. 00:37:01.433 [2024-07-13 05:26:07.602082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.433 [2024-07-13 05:26:07.602115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.433 qpair failed and we were unable to recover it. 00:37:01.433 [2024-07-13 05:26:07.602242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.433 [2024-07-13 05:26:07.602275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.433 qpair failed and we were unable to recover it. 00:37:01.433 [2024-07-13 05:26:07.602436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.433 [2024-07-13 05:26:07.602469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.433 qpair failed and we were unable to recover it. 00:37:01.433 [2024-07-13 05:26:07.602634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.433 [2024-07-13 05:26:07.602667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.433 qpair failed and we were unable to recover it. 00:37:01.433 [2024-07-13 05:26:07.602805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.433 [2024-07-13 05:26:07.602837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.433 qpair failed and we were unable to recover it. 00:37:01.433 [2024-07-13 05:26:07.602970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.433 [2024-07-13 05:26:07.603004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.433 qpair failed and we were unable to recover it. 00:37:01.433 [2024-07-13 05:26:07.603166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.433 [2024-07-13 05:26:07.603199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.433 qpair failed and we were unable to recover it. 
00:37:01.433 [2024-07-13 05:26:07.603336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.433 [2024-07-13 05:26:07.603369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:01.433 qpair failed and we were unable to recover it.
00:37:01.434 [2024-07-13 05:26:07.611265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.434 [2024-07-13 05:26:07.611314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:01.434 qpair failed and we were unable to recover it.
00:37:01.434 [2024-07-13 05:26:07.612788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.434 [2024-07-13 05:26:07.612828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:01.435 qpair failed and we were unable to recover it.
00:37:01.439 [2024-07-13 05:26:07.640851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.439 [2024-07-13 05:26:07.640890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:01.439 qpair failed and we were unable to recover it.
00:37:01.439 [2024-07-13 05:26:07.641054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.641088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.641227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.641271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.641420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.641453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.641582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.641619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.641778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.641811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.641948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.641981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.642122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.642154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.642289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.642321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.642480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.642513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.642659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.642691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 
00:37:01.439 [2024-07-13 05:26:07.642825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.642858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.643021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.643053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.643197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.643230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.643371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.643404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.643537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.643569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.643728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.643760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.643895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.643929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.644069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.644103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.644238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.644270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.644418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.644452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 
00:37:01.439 [2024-07-13 05:26:07.644584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.644617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.644762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.644795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.644938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.644972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.645132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.645165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.645325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.645358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.645490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.645523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.645656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.645689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.645848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.645888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.646060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.646093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.646221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.646253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 
00:37:01.439 [2024-07-13 05:26:07.646393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.646426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.646566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.646599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.646745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.646778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.646939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.646972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.647125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.647157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.647317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.647349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.647508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.647541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.647671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.647703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.647858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.439 [2024-07-13 05:26:07.647895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.439 qpair failed and we were unable to recover it. 00:37:01.439 [2024-07-13 05:26:07.648059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.648092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 
00:37:01.440 [2024-07-13 05:26:07.648230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.648263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.648410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.648443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.648600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.648632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.648760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.648797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.648966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.649001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.649133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.649166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.649309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.649342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.649483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.649516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.649695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.649740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.649899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.649932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 
00:37:01.440 [2024-07-13 05:26:07.650061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.650094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.650226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.650259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.650429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.650461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.650618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.650652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.650803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.650838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.651022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.651055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.651187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.651220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.651386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.651419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.651547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.651581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.651721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.651755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 
00:37:01.440 [2024-07-13 05:26:07.651917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.651952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.652101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.652134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.652291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.652324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.652455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.652488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.652618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.652651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.652795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.652839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.652998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.653032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.653173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.653206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.653346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.653379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.653548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.653581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 
00:37:01.440 [2024-07-13 05:26:07.653741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.653774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.653921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.653955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.654116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.654148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.654296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.654329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.654467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.654499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.654640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.654673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.654829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.654862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.655003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.655035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.440 qpair failed and we were unable to recover it. 00:37:01.440 [2024-07-13 05:26:07.655171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.440 [2024-07-13 05:26:07.655204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.655334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.655368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 
00:37:01.441 [2024-07-13 05:26:07.655558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.655590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.655717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.655749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.655876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.655909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.656053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.656090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.656233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.656266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.656394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.656426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.656582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.656615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.656763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.656795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.656940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.656973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.657099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.657132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 
00:37:01.441 [2024-07-13 05:26:07.657261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.657294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.657429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.657461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.657624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.657657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.657815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.657848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.658018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.658052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.658182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.658214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.658348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.658380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.658556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.658588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.658768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.658801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.658952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.658986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 
00:37:01.441 [2024-07-13 05:26:07.659117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.659150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.659306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.659339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.659499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.659531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.659674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.659707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.659879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.659914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.660056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.660089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.660263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.660296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.660429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.660462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.660617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.660650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.660790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.660823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 
00:37:01.441 [2024-07-13 05:26:07.660982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.661016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.661145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.661177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.661307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.661339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.661501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.661534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.661667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.661699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.661843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.661883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.662026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.662059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.662193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.662226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.662361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.441 [2024-07-13 05:26:07.662394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.441 qpair failed and we were unable to recover it. 00:37:01.441 [2024-07-13 05:26:07.662557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.442 [2024-07-13 05:26:07.662589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.442 qpair failed and we were unable to recover it. 
00:37:01.442 [2024-07-13 05:26:07.662735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.442 [2024-07-13 05:26:07.662768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.442 qpair failed and we were unable to recover it. 00:37:01.442 [2024-07-13 05:26:07.662911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.442 [2024-07-13 05:26:07.662945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.442 qpair failed and we were unable to recover it. 00:37:01.442 [2024-07-13 05:26:07.663099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.442 [2024-07-13 05:26:07.663131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.442 qpair failed and we were unable to recover it. 00:37:01.442 [2024-07-13 05:26:07.663262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.442 [2024-07-13 05:26:07.663299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.442 qpair failed and we were unable to recover it. 00:37:01.442 [2024-07-13 05:26:07.663435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.442 [2024-07-13 05:26:07.663468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.442 qpair failed and we were unable to recover it. 00:37:01.442 [2024-07-13 05:26:07.663600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.442 [2024-07-13 05:26:07.663632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.442 qpair failed and we were unable to recover it. 00:37:01.442 [2024-07-13 05:26:07.663767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.442 [2024-07-13 05:26:07.663800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.442 qpair failed and we were unable to recover it. 00:37:01.442 [2024-07-13 05:26:07.663961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.442 [2024-07-13 05:26:07.663994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.442 qpair failed and we were unable to recover it. 00:37:01.442 [2024-07-13 05:26:07.664150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.442 [2024-07-13 05:26:07.664183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.442 qpair failed and we were unable to recover it. 00:37:01.442 [2024-07-13 05:26:07.664328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.442 [2024-07-13 05:26:07.664372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.442 qpair failed and we were unable to recover it. 
00:37:01.442 [2024-07-13 05:26:07.664532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.442 [2024-07-13 05:26:07.664564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.442 qpair failed and we were unable to recover it.
[... the identical error triple — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeats continuously from 05:26:07.664694 through 05:26:07.701251 with no other variation ...]
00:37:01.447 [2024-07-13 05:26:07.701388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.447 [2024-07-13 05:26:07.701422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.447 qpair failed and we were unable to recover it.
00:37:01.447 [2024-07-13 05:26:07.701593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.447 [2024-07-13 05:26:07.701626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.447 qpair failed and we were unable to recover it. 00:37:01.447 [2024-07-13 05:26:07.701769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.447 [2024-07-13 05:26:07.701801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.447 qpair failed and we were unable to recover it. 00:37:01.447 [2024-07-13 05:26:07.701934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.447 [2024-07-13 05:26:07.701968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.447 qpair failed and we were unable to recover it. 00:37:01.447 [2024-07-13 05:26:07.702127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.447 [2024-07-13 05:26:07.702160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.447 qpair failed and we were unable to recover it. 00:37:01.447 [2024-07-13 05:26:07.702291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.447 [2024-07-13 05:26:07.702324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.447 qpair failed and we were unable to recover it. 00:37:01.447 [2024-07-13 05:26:07.702486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.447 [2024-07-13 05:26:07.702520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.447 qpair failed and we were unable to recover it. 00:37:01.447 [2024-07-13 05:26:07.702652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.447 [2024-07-13 05:26:07.702684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.447 qpair failed and we were unable to recover it. 00:37:01.447 [2024-07-13 05:26:07.702815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.447 [2024-07-13 05:26:07.702848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.447 qpair failed and we were unable to recover it. 00:37:01.447 [2024-07-13 05:26:07.702990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.447 [2024-07-13 05:26:07.703023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.447 qpair failed and we were unable to recover it. 00:37:01.447 [2024-07-13 05:26:07.703158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.703191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 
00:37:01.448 [2024-07-13 05:26:07.703319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.703357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.703484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.703517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.703652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.703685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.703821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.703853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.704000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.704032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.704161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.704193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.704365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.704398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.704558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.704590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.704723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.704756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.704891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.704925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 
00:37:01.448 [2024-07-13 05:26:07.705066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.705099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.705235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.705268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.705420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.705452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.705626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.705659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.705797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.705829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.705971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.706004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.706149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.706182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.706311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.706344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.706501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.706533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.706667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.706700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 
00:37:01.448 [2024-07-13 05:26:07.706854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.706894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.707030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.707062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.707189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.707222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.707384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.707416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.707546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.707578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.707744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.707776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.707962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.708000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.708146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.708179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.708321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.708355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.708512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.708545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 
00:37:01.448 [2024-07-13 05:26:07.708683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.708716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.708849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.708894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.709030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.709063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.709206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.709239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.709377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.709410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.709532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.709576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.709719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.709752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.709913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.709947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.710079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.710112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 00:37:01.448 [2024-07-13 05:26:07.710268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.448 [2024-07-13 05:26:07.710301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.448 qpair failed and we were unable to recover it. 
00:37:01.448 [2024-07-13 05:26:07.710435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.710473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.710621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.710653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.710794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.710827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.710975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.711008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.711149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.711182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.711345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.711377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.711545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.711578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.711708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.711741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.711891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.711925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.712055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.712088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 
00:37:01.449 [2024-07-13 05:26:07.712224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.712257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.712398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.712431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.712572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.712604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.712780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.712814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.712970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.713004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.713140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.713174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.713338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.713370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.713510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.713543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.713691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.713724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.713892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.713925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 
00:37:01.449 [2024-07-13 05:26:07.714071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.714103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.714261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.714294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.714424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.714456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.714596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.714628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.714787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.714820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.714965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.714998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.715133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.715166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.715310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.715343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.715505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.715537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.715668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.715700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 
00:37:01.449 [2024-07-13 05:26:07.715855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.715894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.716028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.716060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.716185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.716217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.716350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.716384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.716550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.716583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.716742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.716774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.716940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.716974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.717109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.717142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.717286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.717319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.449 [2024-07-13 05:26:07.717477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.717510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 
00:37:01.449 [2024-07-13 05:26:07.717672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.449 [2024-07-13 05:26:07.717709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.449 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.717841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.717880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.718029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.718061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.718189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.718221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.718367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.718399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.718538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.718570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.718708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.718741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.718889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.718922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.719077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.719110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.719276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.719308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 
00:37:01.450 [2024-07-13 05:26:07.719435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.719468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.719595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.719628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.719758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.719791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.719916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.719949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.720093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.720125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.720270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.720304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.720452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.720485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.720615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.720647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.720775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.720807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.720940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.720985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 
00:37:01.450 [2024-07-13 05:26:07.721117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.721150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.721294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.721327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.721469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.721502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.721637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.721670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.721830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.721863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.722045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.722078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.722215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.722247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.722409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.722442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.722569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.722602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.722748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.722780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 
00:37:01.450 [2024-07-13 05:26:07.722935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.722969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.723108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.723141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.723272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.723305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.723435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.723468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.723599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.723631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.723767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.723800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.723946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.723994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.724162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.724196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.724339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.724373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.724535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.724569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 
00:37:01.450 [2024-07-13 05:26:07.724722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.724759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.450 qpair failed and we were unable to recover it. 00:37:01.450 [2024-07-13 05:26:07.724901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.450 [2024-07-13 05:26:07.724934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.451 qpair failed and we were unable to recover it. 00:37:01.451 [2024-07-13 05:26:07.725072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.451 [2024-07-13 05:26:07.725105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.451 qpair failed and we were unable to recover it. 00:37:01.451 [2024-07-13 05:26:07.725264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.451 [2024-07-13 05:26:07.725297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.451 qpair failed and we were unable to recover it. 00:37:01.451 [2024-07-13 05:26:07.725437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.451 [2024-07-13 05:26:07.725469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.451 qpair failed and we were unable to recover it. 00:37:01.451 [2024-07-13 05:26:07.725654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.451 [2024-07-13 05:26:07.725687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.451 qpair failed and we were unable to recover it. 00:37:01.451 [2024-07-13 05:26:07.725836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.451 [2024-07-13 05:26:07.725873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.451 qpair failed and we were unable to recover it. 00:37:01.451 [2024-07-13 05:26:07.726009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.451 [2024-07-13 05:26:07.726042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.451 qpair failed and we were unable to recover it. 00:37:01.451 [2024-07-13 05:26:07.726180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.451 [2024-07-13 05:26:07.726212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.451 qpair failed and we were unable to recover it. 00:37:01.451 [2024-07-13 05:26:07.726344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.451 [2024-07-13 05:26:07.726376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.451 qpair failed and we were unable to recover it. 
[... 200 further connect attempts elided, [2024-07-13 05:26:07.726509] through [2024-07-13 05:26:07.762139]: each repeats the same three-line failure (posix.c:1038:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it."), eight of them against tqpair=0x61500021ff00 and the rest against tqpair=0x6150001f2780 ...]
00:37:01.456 [2024-07-13 05:26:07.762282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.456 [2024-07-13 05:26:07.762314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.456 qpair failed and we were unable to recover it. 00:37:01.456 [2024-07-13 05:26:07.762451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.456 [2024-07-13 05:26:07.762483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.456 qpair failed and we were unable to recover it. 00:37:01.456 [2024-07-13 05:26:07.762624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.456 [2024-07-13 05:26:07.762658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.456 qpair failed and we were unable to recover it. 00:37:01.456 [2024-07-13 05:26:07.762794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.456 [2024-07-13 05:26:07.762827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.456 qpair failed and we were unable to recover it. 00:37:01.456 [2024-07-13 05:26:07.762996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.456 [2024-07-13 05:26:07.763029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.456 qpair failed and we were unable to recover it. 00:37:01.456 [2024-07-13 05:26:07.763172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.456 [2024-07-13 05:26:07.763205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.456 qpair failed and we were unable to recover it. 00:37:01.456 [2024-07-13 05:26:07.763330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.456 [2024-07-13 05:26:07.763362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.456 qpair failed and we were unable to recover it. 00:37:01.456 [2024-07-13 05:26:07.763497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.456 [2024-07-13 05:26:07.763530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.456 qpair failed and we were unable to recover it. 00:37:01.456 [2024-07-13 05:26:07.763662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.456 [2024-07-13 05:26:07.763695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.456 qpair failed and we were unable to recover it. 00:37:01.456 [2024-07-13 05:26:07.763852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.456 [2024-07-13 05:26:07.763890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.456 qpair failed and we were unable to recover it. 
00:37:01.456 [2024-07-13 05:26:07.764017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.456 [2024-07-13 05:26:07.764050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.456 qpair failed and we were unable to recover it. 00:37:01.456 [2024-07-13 05:26:07.764210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.456 [2024-07-13 05:26:07.764242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.456 qpair failed and we were unable to recover it. 00:37:01.456 [2024-07-13 05:26:07.764399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.456 [2024-07-13 05:26:07.764432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.456 qpair failed and we were unable to recover it. 00:37:01.456 [2024-07-13 05:26:07.764624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.456 [2024-07-13 05:26:07.764657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.456 qpair failed and we were unable to recover it. 00:37:01.456 [2024-07-13 05:26:07.764789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.456 [2024-07-13 05:26:07.764821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.456 qpair failed and we were unable to recover it. 00:37:01.456 [2024-07-13 05:26:07.764970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.456 [2024-07-13 05:26:07.765003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.456 qpair failed and we were unable to recover it. 00:37:01.456 [2024-07-13 05:26:07.765133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.456 [2024-07-13 05:26:07.765166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.456 qpair failed and we were unable to recover it. 00:37:01.456 [2024-07-13 05:26:07.765298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.456 [2024-07-13 05:26:07.765330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.456 qpair failed and we were unable to recover it. 00:37:01.456 [2024-07-13 05:26:07.765476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.456 [2024-07-13 05:26:07.765509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.456 qpair failed and we were unable to recover it. 00:37:01.456 [2024-07-13 05:26:07.765653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.456 [2024-07-13 05:26:07.765686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.456 qpair failed and we were unable to recover it. 
00:37:01.456 [2024-07-13 05:26:07.765847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.456 [2024-07-13 05:26:07.765885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.456 qpair failed and we were unable to recover it. 00:37:01.456 [2024-07-13 05:26:07.766022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.456 [2024-07-13 05:26:07.766055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.456 qpair failed and we were unable to recover it. 00:37:01.456 [2024-07-13 05:26:07.766184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.456 [2024-07-13 05:26:07.766216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.456 qpair failed and we were unable to recover it. 00:37:01.456 [2024-07-13 05:26:07.766348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.456 [2024-07-13 05:26:07.766381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.456 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.766507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.766540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.766667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.766700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.766860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.766910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.767047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.767079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.767206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.767239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.767383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.767416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 
00:37:01.457 [2024-07-13 05:26:07.767557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.767594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.767735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.767768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.767897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.767931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.768094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.768138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.768293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.768326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.768487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.768520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.768675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.768708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.768850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.768888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.769029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.769061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.769225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.769258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 
00:37:01.457 [2024-07-13 05:26:07.769390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.769422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.769563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.769595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.769751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.769783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.769917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.769951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.770090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.770122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.770286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.770318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.770466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.770499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.770635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.770668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.770804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.770836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.770975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.771008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 
00:37:01.457 [2024-07-13 05:26:07.771136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.771168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.771300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.771332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.771470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.771502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.771689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.771721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.771848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.771886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.772023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.772056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.772213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.772246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.772378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.772411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.772568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.772600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.772726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.772759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 
00:37:01.457 [2024-07-13 05:26:07.772914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.772947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.773095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.773128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.773290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.773323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.773450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.773482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.457 qpair failed and we were unable to recover it. 00:37:01.457 [2024-07-13 05:26:07.773613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.457 [2024-07-13 05:26:07.773651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.773823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.773857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.774009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.774041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.774231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.774264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.774406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.774439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.774580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.774612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 
00:37:01.458 [2024-07-13 05:26:07.774739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.774771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.774911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.774945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.775088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.775122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.775261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.775294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.775424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.775457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.775584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.775616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.775774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.775807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.775946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.775979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.776129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.776162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.776291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.776324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 
00:37:01.458 [2024-07-13 05:26:07.776484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.776516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.776670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.776702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.776847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.776885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.777042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.777075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.777234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.777266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.777392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.777425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.777588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.777621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.777750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.777783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.777919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.777953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.778084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.778117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 
00:37:01.458 [2024-07-13 05:26:07.778251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.778283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.778469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.778502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.778673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.778706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.778837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.778875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.779025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.779059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.779221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.779253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.779379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.779412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.779568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.779640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.779801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.779834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.779994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.780027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 
00:37:01.458 [2024-07-13 05:26:07.780172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.780205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.780361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.780393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.780549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.780582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.780738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.780772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.780899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.458 [2024-07-13 05:26:07.780931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.458 qpair failed and we were unable to recover it. 00:37:01.458 [2024-07-13 05:26:07.781090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.459 [2024-07-13 05:26:07.781124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.459 qpair failed and we were unable to recover it. 00:37:01.459 [2024-07-13 05:26:07.781256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.459 [2024-07-13 05:26:07.781289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.459 qpair failed and we were unable to recover it. 00:37:01.459 [2024-07-13 05:26:07.781451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.459 [2024-07-13 05:26:07.781484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.459 qpair failed and we were unable to recover it. 00:37:01.459 [2024-07-13 05:26:07.781616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.459 [2024-07-13 05:26:07.781648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.459 qpair failed and we were unable to recover it. 00:37:01.459 [2024-07-13 05:26:07.781776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.459 [2024-07-13 05:26:07.781810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.459 qpair failed and we were unable to recover it. 
00:37:01.459 [2024-07-13 05:26:07.781967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.459 [2024-07-13 05:26:07.782001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.459 qpair failed and we were unable to recover it. 00:37:01.459 [2024-07-13 05:26:07.782144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.459 [2024-07-13 05:26:07.782177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.459 qpair failed and we were unable to recover it. 00:37:01.459 [2024-07-13 05:26:07.782320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.459 [2024-07-13 05:26:07.782353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.459 qpair failed and we were unable to recover it. 00:37:01.459 [2024-07-13 05:26:07.782480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.459 [2024-07-13 05:26:07.782513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.459 qpair failed and we were unable to recover it. 00:37:01.459 [2024-07-13 05:26:07.782670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.459 [2024-07-13 05:26:07.782703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.459 qpair failed and we were unable to recover it. 00:37:01.459 [2024-07-13 05:26:07.782898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.459 [2024-07-13 05:26:07.782932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.459 qpair failed and we were unable to recover it. 00:37:01.459 [2024-07-13 05:26:07.783066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.459 [2024-07-13 05:26:07.783098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.459 qpair failed and we were unable to recover it. 00:37:01.459 [2024-07-13 05:26:07.783243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.459 [2024-07-13 05:26:07.783277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.459 qpair failed and we were unable to recover it. 00:37:01.459 [2024-07-13 05:26:07.783410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.459 [2024-07-13 05:26:07.783442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.459 qpair failed and we were unable to recover it. 00:37:01.459 [2024-07-13 05:26:07.783585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.459 [2024-07-13 05:26:07.783618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.459 qpair failed and we were unable to recover it. 
00:37:01.459 [2024-07-13 05:26:07.783757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.459 [2024-07-13 05:26:07.783789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.459 qpair failed and we were unable to recover it. 00:37:01.459 [2024-07-13 05:26:07.783950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.459 [2024-07-13 05:26:07.783983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.459 qpair failed and we were unable to recover it. 00:37:01.459 [2024-07-13 05:26:07.784120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.459 [2024-07-13 05:26:07.784153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.459 qpair failed and we were unable to recover it. 00:37:01.459 [2024-07-13 05:26:07.784296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.459 [2024-07-13 05:26:07.784328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.459 qpair failed and we were unable to recover it. 00:37:01.459 [2024-07-13 05:26:07.784475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.459 [2024-07-13 05:26:07.784508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.459 qpair failed and we were unable to recover it. 00:37:01.459 [2024-07-13 05:26:07.784679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.459 [2024-07-13 05:26:07.784712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.459 qpair failed and we were unable to recover it. 00:37:01.459 [2024-07-13 05:26:07.784862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.459 [2024-07-13 05:26:07.784900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.459 qpair failed and we were unable to recover it. 00:37:01.459 [2024-07-13 05:26:07.785071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.459 [2024-07-13 05:26:07.785104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.459 qpair failed and we were unable to recover it. 00:37:01.459 [2024-07-13 05:26:07.785232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.459 [2024-07-13 05:26:07.785264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.459 qpair failed and we were unable to recover it. 00:37:01.459 [2024-07-13 05:26:07.785395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.459 [2024-07-13 05:26:07.785428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.459 qpair failed and we were unable to recover it. 
00:37:01.459 [2024-07-13 05:26:07.785557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.459 [2024-07-13 05:26:07.785590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:01.459 qpair failed and we were unable to recover it.
00:37:01.459 [... this three-line sequence repeats back-to-back, with only the microsecond timestamps advancing (2024-07-13 05:26:07.785731 through 05:26:07.818831) and console timestamps running 00:37:01.459 through 00:37:01.464; nothing else changes between repeats ...]
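For reference, errno 111 on Linux is ECONNREFUSED: each connect() from the initiator reaches 10.0.0.2, but nothing is accepting on port 4420 (the NVMe/TCP default), so the peer typically answers the SYN with a RST and every qpair reconnect fails immediately, which is why recovery never succeeds. A minimal shell probe that reproduces the same failure mode — an illustrative sketch, not part of the test suite, assuming a Linux bash built with /dev/tcp support, strace installed, and a machine that can actually reach that host (elsewhere the probe may time out instead of being refused):

# Probe the same address/port the initiator keeps retrying. With no nvmf
# target listening there, connect() returns -1 with errno 111, which
# strace decodes as ECONNREFUSED:
strace -f -e trace=connect bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'
# expected tail of the trace while the port is closed:
#   connect(3, {sa_family=AF_INET, sin_port=htons(4420),
#     sin_addr=inet_addr("10.0.0.2")}, 16) = -1 ECONNREFUSED (Connection refused)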
00:37:01.464 [2024-07-13 05:26:07.819010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.464 [2024-07-13 05:26:07.819044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.464 qpair failed and we were unable to recover it. 00:37:01.464 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:01.464 [2024-07-13 05:26:07.819213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.464 [2024-07-13 05:26:07.819248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.464 qpair failed and we were unable to recover it. 00:37:01.464 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:37:01.464 [2024-07-13 05:26:07.819389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.464 [2024-07-13 05:26:07.819423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.464 qpair failed and we were unable to recover it. 00:37:01.464 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:01.464 [2024-07-13 05:26:07.819579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.464 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:01.464 [2024-07-13 05:26:07.819613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.464 qpair failed and we were unable to recover it. 00:37:01.464 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:01.464 [2024-07-13 05:26:07.819776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.464 [2024-07-13 05:26:07.819810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.464 qpair failed and we were unable to recover it. 00:37:01.464 [2024-07-13 05:26:07.819983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.464 [2024-07-13 05:26:07.820018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.464 qpair failed and we were unable to recover it. 00:37:01.464 [2024-07-13 05:26:07.820166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.464 [2024-07-13 05:26:07.820203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.464 qpair failed and we were unable to recover it. 00:37:01.464 [2024-07-13 05:26:07.820340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.464 [2024-07-13 05:26:07.820375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.464 qpair failed and we were unable to recover it. 
00:37:01.464 [2024-07-13 05:26:07.820507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.464 [2024-07-13 05:26:07.820541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:01.464 qpair failed and we were unable to recover it.
[... the same triplet repeats 119 more times without interruption, timestamps 05:26:07.820692 through 05:26:07.842888 ...]
[... the triplet repeats 5 more times, timestamps 05:26:07.843048 through 05:26:07.843890 ...]
00:37:01.467 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
[... 2 more triplets, timestamps 05:26:07.844052 through 05:26:07.844262 ...]
00:37:01.467 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:37:01.467 [2024-07-13 05:26:07.844417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.467 [2024-07-13 05:26:07.844452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:01.467 qpair failed and we were unable to recover it.
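The two harness lines above carry the actual test progress in this stretch of log: the trap guarantees that shared-memory stats are dumped and nvmftestfini tears the target down on any exit, and rpc_cmd bdev_malloc_create 64 512 -b Malloc0 creates a 64 MiB RAM-backed bdev with 512-byte blocks named Malloc0 for the test to export. Issued outside the harness, the same RPC goes through SPDK's rpc.py (path assumed relative to an SPDK checkout; "cleanup" below is a stand-in for the harness's process_shm/nvmftestfini pair):

    # 64 = total size in MiB, 512 = block size in bytes, -b = bdev name.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # Generic form of the cleanup idiom: run `cleanup` on interrupt,
    # termination, or normal exit (cleanup assumed to be defined).
    trap 'cleanup' SIGINT SIGTERM EXIT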
00:37:01.467 [2024-07-13 05:26:07.844597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.467 [2024-07-13 05:26:07.844632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:01.467 qpair failed and we were unable to recover it.
00:37:01.467 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:01.467 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the triplet repeats 8 more times, timestamps 05:26:07.844802 through 05:26:07.846166 ...]
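xtrace_disable followed by "set +x" is the harness muting bash command tracing so helper internals don't flood the log; a matching restore turns it back on afterwards. The real helpers in autotest_common.sh also save and restore the prior trace state, but the core idiom reduces to this illustrative sketch:

    xtrace_disable() { set +x; }   # stop echoing each command
    xtrace_restore() { set -x; }   # resume tracing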
00:37:01.467 [2024-07-13 05:26:07.846306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.467 [2024-07-13 05:26:07.846339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:01.467 qpair failed and we were unable to recover it.
[... the same triplet repeats 49 more times, timestamps 05:26:07.846498 through 05:26:07.855597 ...]
00:37:01.469 [2024-07-13 05:26:07.855762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.469 [2024-07-13 05:26:07.855795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.469 qpair failed and we were unable to recover it. 00:37:01.469 [2024-07-13 05:26:07.855937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.469 [2024-07-13 05:26:07.855971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.469 qpair failed and we were unable to recover it. 00:37:01.469 [2024-07-13 05:26:07.856141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.469 [2024-07-13 05:26:07.856174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.469 qpair failed and we were unable to recover it. 00:37:01.469 [2024-07-13 05:26:07.856311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.469 [2024-07-13 05:26:07.856346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.469 qpair failed and we were unable to recover it. 00:37:01.469 [2024-07-13 05:26:07.856522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.469 [2024-07-13 05:26:07.856556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.469 qpair failed and we were unable to recover it. 00:37:01.469 [2024-07-13 05:26:07.856696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.469 [2024-07-13 05:26:07.856730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.469 qpair failed and we were unable to recover it. 00:37:01.469 [2024-07-13 05:26:07.856900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.469 [2024-07-13 05:26:07.856943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.469 qpair failed and we were unable to recover it. 00:37:01.469 [2024-07-13 05:26:07.857077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.469 [2024-07-13 05:26:07.857111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.469 qpair failed and we were unable to recover it. 00:37:01.469 [2024-07-13 05:26:07.857302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.469 [2024-07-13 05:26:07.857336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.469 qpair failed and we were unable to recover it. 00:37:01.469 [2024-07-13 05:26:07.857465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.469 [2024-07-13 05:26:07.857499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.469 qpair failed and we were unable to recover it. 
00:37:01.469 [2024-07-13 05:26:07.857661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.469 [2024-07-13 05:26:07.857695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.469 qpair failed and we were unable to recover it. 00:37:01.469 [2024-07-13 05:26:07.857853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.469 [2024-07-13 05:26:07.857894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.469 qpair failed and we were unable to recover it. 00:37:01.469 [2024-07-13 05:26:07.858043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.469 [2024-07-13 05:26:07.858077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.469 qpair failed and we were unable to recover it. 00:37:01.469 [2024-07-13 05:26:07.858242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.469 [2024-07-13 05:26:07.858276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.469 qpair failed and we were unable to recover it. 00:37:01.469 [2024-07-13 05:26:07.858422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.469 [2024-07-13 05:26:07.858455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.469 qpair failed and we were unable to recover it. 00:37:01.469 [2024-07-13 05:26:07.858607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.469 [2024-07-13 05:26:07.858645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.469 qpair failed and we were unable to recover it. 00:37:01.469 [2024-07-13 05:26:07.858804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.469 [2024-07-13 05:26:07.858837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.469 qpair failed and we were unable to recover it. 00:37:01.469 [2024-07-13 05:26:07.858998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.469 [2024-07-13 05:26:07.859033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.469 qpair failed and we were unable to recover it. 00:37:01.469 [2024-07-13 05:26:07.859181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.469 [2024-07-13 05:26:07.859215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.469 qpair failed and we were unable to recover it. 00:37:01.469 [2024-07-13 05:26:07.859374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.469 [2024-07-13 05:26:07.859408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.469 qpair failed and we were unable to recover it. 
00:37:01.469 [2024-07-13 05:26:07.859560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.469 [2024-07-13 05:26:07.859593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.469 qpair failed and we were unable to recover it. 00:37:01.469 [2024-07-13 05:26:07.859756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.469 [2024-07-13 05:26:07.859790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.469 qpair failed and we were unable to recover it. 00:37:01.469 [2024-07-13 05:26:07.859942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.469 [2024-07-13 05:26:07.859977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.469 qpair failed and we were unable to recover it. 00:37:01.469 [2024-07-13 05:26:07.860141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.469 [2024-07-13 05:26:07.860175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.469 qpair failed and we were unable to recover it. 00:37:01.469 [2024-07-13 05:26:07.860323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.469 [2024-07-13 05:26:07.860357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.469 qpair failed and we were unable to recover it. 00:37:01.469 [2024-07-13 05:26:07.860492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.469 [2024-07-13 05:26:07.860526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.469 qpair failed and we were unable to recover it. 00:37:01.469 [2024-07-13 05:26:07.860686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.469 [2024-07-13 05:26:07.860720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.469 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.860885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.860926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.861082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.861116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.861268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.861313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 
00:37:01.470 [2024-07-13 05:26:07.861504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.861538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.861673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.861708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.861842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.861882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.862054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.862088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.862265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.862299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.862462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.862496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.862632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.862666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.862801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.862835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.863000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.863034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.863177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.863218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 
00:37:01.470 [2024-07-13 05:26:07.863363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.863412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.863563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.863597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.863738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.863771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.863937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.863971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.864120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.864154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.864316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.864350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.864522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.864555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.864719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.864752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.864917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.864952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.865089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.865123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 
00:37:01.470 [2024-07-13 05:26:07.865272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.865306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.865473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.865508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.865639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.865673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.865839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.865897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.866050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.866084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.866247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.866285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.866438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.866472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.866616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.866651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 00:37:01.470 [2024-07-13 05:26:07.866801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.866836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:01.470 qpair failed and we were unable to recover it. 
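For context on the failures above: on Linux, errno = 111 is ECONNREFUSED, i.e. connect() reached 10.0.0.2 but nothing was listening on port 4420 while the target side was down, so every reconnect attempt was refused. A minimal bash sketch, not part of the test suite, that reproduces the same errno (the address and port here are local stand-ins):

    # Probe a TCP port with no listener; bash's /dev/tcp connect() fails with
    # ECONNREFUSED (errno 111), the same failure posix_sock_create reports above.
    addr=127.0.0.1   # stand-in for the target's 10.0.0.2
    port=4420        # same NVMe/TCP port the initiator keeps retrying
    if ! timeout 1 bash -c "exec 3<>/dev/tcp/${addr}/${port}" 2>/dev/null; then
        echo "connect() to ${addr}:${port} refused (or timed out), as expected with no listener"
    fi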
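Immediately below, the test rebuilds the target over JSON-RPC: a Malloc0 bdev is created, then nvmf_create_transport, nvmf_create_subsystem, nvmf_subsystem_add_ns and nvmf_subsystem_add_listener are issued through the rpc_cmd wrapper. A hedged sketch of the equivalent direct scripts/rpc.py calls follows; the rpc.py path and the Malloc0 size arguments are assumptions, while the RPC names, NQN, serial and listen addresses are exactly those visible in the log:

    RPC=./scripts/rpc.py                        # assumed in-tree path; rpc_cmd resolves it itself
    $RPC bdev_malloc_create 64 512 -b Malloc0   # backing namespace; 64 MiB / 512 B are illustrative
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420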
00:37:01.470 [2024-07-13 05:26:07.867095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.470 [2024-07-13 05:26:07.867163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:37:01.470 [2024-07-13 05:26:07.867193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:37:01.470 [2024-07-13 05:26:07.867239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:37:01.470 [2024-07-13 05:26:07.867271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.470 [2024-07-13 05:26:07.867299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.470 [2024-07-13 05:26:07.867327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.470 Unable to reset the controller. 00:37:01.731 Malloc0 00:37:01.732 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:01.732 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:01.732 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:01.732 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:01.732 [2024-07-13 05:26:07.926055] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:01.732 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:01.732 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:01.732 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:01.732 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:01.732 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:01.732 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:01.732 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:01.732 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:01.732 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:01.732 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:01.732 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:01.732 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # 
set +x 00:37:01.732 [2024-07-13 05:26:07.955603] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:01.732 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:01.732 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:01.732 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:01.732 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:01.732 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:01.732 05:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 869250 00:37:02.670 Controller properly reset. 00:37:06.858 Initializing NVMe Controllers 00:37:06.858 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:06.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:06.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:37:06.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:37:06.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:37:06.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:37:06.858 Initialization complete. Launching workers. 00:37:06.858 Starting thread on core 1 00:37:06.858 Starting thread on core 2 00:37:06.858 Starting thread on core 3 00:37:06.858 Starting thread on core 0 00:37:06.858 05:26:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:37:06.858 00:37:06.858 real 0m11.489s 00:37:06.858 user 0m33.402s 00:37:06.858 sys 0m7.627s 00:37:06.858 05:26:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:06.858 05:26:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:06.858 ************************************ 00:37:06.858 END TEST nvmf_target_disconnect_tc2 00:37:06.858 ************************************ 00:37:06.858 05:26:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:37:06.858 05:26:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:37:06.858 05:26:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:37:06.858 05:26:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:37:06.858 05:26:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:06.858 05:26:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:37:06.858 05:26:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:06.858 05:26:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:37:06.858 05:26:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:06.858 05:26:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:06.858 rmmod nvme_tcp 00:37:06.858 rmmod nvme_fabrics 00:37:06.858 rmmod 
nvme_keyring 00:37:06.858 05:26:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:06.858 05:26:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:37:06.858 05:26:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:37:06.858 05:26:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 869704 ']' 00:37:06.858 05:26:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 869704 00:37:06.858 05:26:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 869704 ']' 00:37:06.858 05:26:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 869704 00:37:06.858 05:26:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:37:06.858 05:26:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:06.858 05:26:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 869704 00:37:06.858 05:26:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:37:06.858 05:26:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:37:06.858 05:26:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 869704' 00:37:06.858 killing process with pid 869704 00:37:06.858 05:26:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 869704 00:37:06.858 05:26:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 869704 00:37:08.232 05:26:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:08.232 05:26:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:08.232 05:26:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:08.232 05:26:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:08.232 05:26:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:08.232 05:26:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:08.232 05:26:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:08.232 05:26:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:10.135 05:26:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:10.135 00:37:10.135 real 0m17.482s 00:37:10.135 user 1m1.260s 00:37:10.135 sys 0m10.175s 00:37:10.135 05:26:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:10.135 05:26:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:10.135 ************************************ 00:37:10.135 END TEST nvmf_target_disconnect 00:37:10.135 ************************************ 00:37:10.135 05:26:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:37:10.135 05:26:16 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:37:10.135 05:26:16 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:10.135 05:26:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:10.393 05:26:16 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:37:10.393 00:37:10.393 real 28m57.834s 00:37:10.393 user 
78m5.483s 00:37:10.393 sys 6m5.686s 00:37:10.393 05:26:16 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:10.393 05:26:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:10.394 ************************************ 00:37:10.394 END TEST nvmf_tcp 00:37:10.394 ************************************ 00:37:10.394 05:26:16 -- common/autotest_common.sh@1142 -- # return 0 00:37:10.394 05:26:16 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:37:10.394 05:26:16 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:10.394 05:26:16 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:10.394 05:26:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:10.394 05:26:16 -- common/autotest_common.sh@10 -- # set +x 00:37:10.394 ************************************ 00:37:10.394 START TEST spdkcli_nvmf_tcp 00:37:10.394 ************************************ 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:10.394 * Looking for test storage... 00:37:10.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:10.394 05:26:16 
spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=870986 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 870986 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 870986 ']' 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:10.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:10.394 05:26:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:10.394 [2024-07-13 05:26:16.828611] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:10.394 [2024-07-13 05:26:16.828767] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid870986 ] 00:37:10.654 EAL: No free 2048 kB hugepages reported on node 1 00:37:10.654 [2024-07-13 05:26:16.968045] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:10.915 [2024-07-13 05:26:17.232444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:10.915 [2024-07-13 05:26:17.232448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:11.480 05:26:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:11.481 05:26:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:37:11.481 05:26:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:37:11.481 05:26:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:11.481 05:26:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:11.481 05:26:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:37:11.481 05:26:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:37:11.481 05:26:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:37:11.481 05:26:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:11.481 05:26:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:11.481 05:26:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:37:11.481 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:37:11.481 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:37:11.481 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:37:11.481 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:37:11.481 '\''/bdevs/malloc create 32 512 
Malloc6'\'' '\''Malloc6'\'' True 00:37:11.481 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:37:11.481 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:11.481 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:37:11.481 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:37:11.481 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:11.481 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:11.481 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:37:11.481 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:11.481 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:11.481 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:37:11.481 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:11.481 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:11.481 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:11.481 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:11.481 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:37:11.481 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:37:11.481 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:11.481 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:37:11.481 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:11.481 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:37:11.481 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:37:11.481 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:37:11.481 ' 00:37:14.762 [2024-07-13 05:26:20.562552] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:15.327 [2024-07-13 05:26:21.803832] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:37:17.860 [2024-07-13 05:26:24.091157] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:37:19.768 [2024-07-13 05:26:26.053627] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:37:21.178 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:37:21.178 Executing command: ['/bdevs/malloc 
create 32 512 Malloc2', 'Malloc2', True] 00:37:21.178 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:37:21.178 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:37:21.178 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:37:21.178 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:37:21.178 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:37:21.178 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:21.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:37:21.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:37:21.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:21.178 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:21.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:37:21.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:21.178 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:21.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:37:21.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:21.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:21.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:21.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:21.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:37:21.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:37:21.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:21.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:37:21.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:21.178 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:37:21.179 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:37:21.179 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:37:21.179 05:26:27 spdkcli_nvmf_tcp -- 
spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:37:21.179 05:26:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:21.179 05:26:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:21.437 05:26:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:37:21.437 05:26:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:21.437 05:26:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:21.437 05:26:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:37:21.437 05:26:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:37:21.695 05:26:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:37:21.695 05:26:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:37:21.695 05:26:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:37:21.695 05:26:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:21.695 05:26:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:21.695 05:26:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:37:21.695 05:26:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:21.695 05:26:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:21.695 05:26:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:37:21.695 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:37:21.695 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:21.695 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:37:21.695 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:37:21.695 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:37:21.695 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:37:21.695 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:21.695 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:37:21.695 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:37:21.695 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:37:21.695 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:37:21.695 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:37:21.695 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:37:21.695 ' 00:37:28.261 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:37:28.261 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:37:28.261 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 
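The create/check/clear cycle above and below is batched through spdkcli_job.py, but each step is an ordinary spdkcli command, and check_match itself uses the one-shot form (scripts/spdkcli.py ll /nvmf). A sketch of a few of the same steps issued one at a time in that one-shot mode; the command strings are copied from the job output, and only the $CLI shorthand is introduced here:

    CLI=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py   # same script check_match invokes
    $CLI /bdevs/malloc create 32 512 Malloc1
    $CLI /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
    $CLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
    $CLI ll /nvmf                                                              # prints the tree the .match file is diffed against
    $CLI /nvmf/subsystem delete nqn.2014-08.org.spdk:cnode1
    $CLI /bdevs/malloc delete Malloc1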
00:37:28.261 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:37:28.261 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:37:28.261 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:37:28.261 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:37:28.261 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:28.261 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:37:28.261 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:37:28.261 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:37:28.261 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:37:28.261 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:37:28.261 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:37:28.261 05:26:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:37:28.261 05:26:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:28.261 05:26:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:28.261 05:26:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 870986 00:37:28.261 05:26:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 870986 ']' 00:37:28.261 05:26:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 870986 00:37:28.261 05:26:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:37:28.261 05:26:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:28.261 05:26:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 870986 00:37:28.261 05:26:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:28.261 05:26:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:28.261 05:26:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 870986' 00:37:28.261 killing process with pid 870986 00:37:28.261 05:26:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 870986 00:37:28.261 05:26:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 870986 00:37:28.827 05:26:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:37:28.827 05:26:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:37:28.827 05:26:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 870986 ']' 00:37:28.827 05:26:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 870986 00:37:28.827 05:26:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 870986 ']' 00:37:28.827 05:26:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 870986 00:37:28.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (870986) - No such process 00:37:28.827 05:26:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 870986 is not found' 00:37:28.827 Process with pid 870986 is not found 00:37:28.827 05:26:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:37:28.827 05:26:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:37:28.827 05:26:35 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:37:28.827 00:37:28.827 real 0m18.466s 00:37:28.827 user 0m38.105s 00:37:28.827 sys 0m1.069s 00:37:28.827 05:26:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:28.827 05:26:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:28.827 ************************************ 00:37:28.827 END TEST spdkcli_nvmf_tcp 00:37:28.827 ************************************ 00:37:28.827 05:26:35 -- common/autotest_common.sh@1142 -- # return 0 00:37:28.827 05:26:35 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:28.827 05:26:35 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:28.827 05:26:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:28.827 05:26:35 -- common/autotest_common.sh@10 -- # set +x 00:37:28.827 ************************************ 00:37:28.827 START TEST nvmf_identify_passthru 00:37:28.827 ************************************ 00:37:28.827 05:26:35 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:28.827 * Looking for test storage... 00:37:28.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:28.827 05:26:35 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:28.827 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:37:28.827 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:28.827 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:28.827 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:28.827 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:28.827 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:28.827 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:28.827 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:28.827 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:28.827 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:28.827 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:28.827 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:28.827 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:28.827 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:28.827 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:28.827 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:28.827 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:28.827 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:28.827 05:26:35 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:28.827 05:26:35 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:28.827 05:26:35 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:28.827 05:26:35 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.827 05:26:35 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.827 05:26:35 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.827 05:26:35 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:28.827 05:26:35 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.827 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:37:28.827 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:28.827 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:28.827 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:28.827 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:28.827 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:28.827 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:28.827 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:28.827 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:28.827 05:26:35 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:28.828 05:26:35 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:28.828 05:26:35 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:28.828 05:26:35 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:28.828 05:26:35 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.828 05:26:35 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.828 05:26:35 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.828 05:26:35 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:28.828 05:26:35 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.828 05:26:35 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:37:28.828 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:28.828 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:28.828 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:28.828 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:28.828 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:28.828 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:28.828 05:26:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:28.828 05:26:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:28.828 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:28.828 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:28.828 05:26:35 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:37:28.828 
05:26:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:30.732 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:30.733 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:30.733 05:26:37 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:30.733 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:30.733 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:30.733 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:30.733 05:26:37 nvmf_identify_passthru 
-- nvmf/common.sh@414 -- # is_hw=yes 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:30.733 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:30.992 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:30.992 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:30.992 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:30.992 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:30.992 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:30.992 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:30.992 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:30.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:30.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:37:30.992 00:37:30.992 --- 10.0.0.2 ping statistics --- 00:37:30.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:30.992 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:37:30.992 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:30.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:30.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:37:30.992 00:37:30.992 --- 10.0.0.1 ping statistics --- 00:37:30.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:30.992 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:37:30.992 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:30.992 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:37:30.992 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:30.992 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:30.992 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:30.992 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:30.992 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:30.992 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:30.992 05:26:37 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:30.992 05:26:37 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:37:30.992 05:26:37 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:30.992 05:26:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:30.992 05:26:37 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:37:30.992 05:26:37 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:37:30.992 05:26:37 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:37:30.992 05:26:37 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:37:30.992 05:26:37 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:37:30.992 05:26:37 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:37:30.992 05:26:37 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:37:30.992 05:26:37 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:37:30.992 05:26:37 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:30.992 05:26:37 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:37:30.992 05:26:37 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:37:30.992 05:26:37 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:37:30.992 05:26:37 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:37:30.992 05:26:37 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:37:30.992 05:26:37 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:37:30.992 05:26:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:37:30.992 05:26:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:37:30.992 05:26:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:37:31.252 EAL: No free 2048 kB hugepages reported on node 1 00:37:35.434 
05:26:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:37:35.434 05:26:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:37:35.434 05:26:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:37:35.434 05:26:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:37:35.434 EAL: No free 2048 kB hugepages reported on node 1 00:37:40.729 05:26:46 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:37:40.729 05:26:46 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:37:40.729 05:26:46 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:40.729 05:26:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:40.729 05:26:46 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:37:40.729 05:26:46 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:40.729 05:26:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:40.729 05:26:46 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=875868 00:37:40.729 05:26:46 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:37:40.729 05:26:46 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:40.729 05:26:46 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 875868 00:37:40.729 05:26:46 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 875868 ']' 00:37:40.729 05:26:46 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:40.729 05:26:46 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:40.729 05:26:46 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:40.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:40.729 05:26:46 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:40.729 05:26:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:40.729 [2024-07-13 05:26:46.264254] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:40.729 [2024-07-13 05:26:46.264400] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:40.729 EAL: No free 2048 kB hugepages reported on node 1 00:37:40.729 [2024-07-13 05:26:46.401829] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:40.729 [2024-07-13 05:26:46.662848] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:40.729 [2024-07-13 05:26:46.662939] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
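The serial and model number extracted here from the local PCIe controller at 0000:88:00.0 (PHLJ916004901P0FGN / INTEL) are the reference values for the passthru comparison later in this test: once the same drive is exported over NVMe/TCP by a target configured with --passthru-identify-ctrlr, the test reads the identical fields back through the fabric and fails on any mismatch. Condensed from the commands that appear verbatim in this trace:

  IDENTIFY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
  # reference identity straight from the PCIe device
  nvme_sn=$($IDENTIFY -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 | grep 'Serial Number:' | awk '{print $3}')
  # the same field as surfaced by the passthru subsystem over TCP
  nvmf_sn=$($IDENTIFY -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:' | awk '{print $3}')
  [ "$nvme_sn" = "$nvmf_sn" ]   # identify_passthru.sh@63 aborts the test on mismatch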
00:37:40.729 [2024-07-13 05:26:46.662970] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:40.729 [2024-07-13 05:26:46.662989] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:40.729 [2024-07-13 05:26:46.663011] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:40.729 [2024-07-13 05:26:46.663104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:40.729 [2024-07-13 05:26:46.663159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:40.729 [2024-07-13 05:26:46.663198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:40.729 [2024-07-13 05:26:46.663208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:37:40.729 05:26:47 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:40.729 05:26:47 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:37:40.729 05:26:47 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:37:40.729 05:26:47 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:40.729 05:26:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:40.729 INFO: Log level set to 20 00:37:40.729 INFO: Requests: 00:37:40.729 { 00:37:40.729 "jsonrpc": "2.0", 00:37:40.729 "method": "nvmf_set_config", 00:37:40.729 "id": 1, 00:37:40.729 "params": { 00:37:40.729 "admin_cmd_passthru": { 00:37:40.729 "identify_ctrlr": true 00:37:40.729 } 00:37:40.729 } 00:37:40.729 } 00:37:40.729 00:37:40.729 INFO: response: 00:37:40.729 { 00:37:40.729 "jsonrpc": "2.0", 00:37:40.729 "id": 1, 00:37:40.729 "result": true 00:37:40.729 } 00:37:40.729 00:37:40.729 05:26:47 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:40.729 05:26:47 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:37:40.729 05:26:47 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:40.729 05:26:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:40.729 INFO: Setting log level to 20 00:37:40.729 INFO: Setting log level to 20 00:37:40.729 INFO: Log level set to 20 00:37:40.729 INFO: Log level set to 20 00:37:40.729 INFO: Requests: 00:37:40.729 { 00:37:40.729 "jsonrpc": "2.0", 00:37:40.729 "method": "framework_start_init", 00:37:40.729 "id": 1 00:37:40.729 } 00:37:40.729 00:37:40.729 INFO: Requests: 00:37:40.729 { 00:37:40.729 "jsonrpc": "2.0", 00:37:40.729 "method": "framework_start_init", 00:37:40.729 "id": 1 00:37:40.729 } 00:37:40.729 00:37:41.296 [2024-07-13 05:26:47.503486] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:37:41.296 INFO: response: 00:37:41.296 { 00:37:41.296 "jsonrpc": "2.0", 00:37:41.296 "id": 1, 00:37:41.296 "result": true 00:37:41.296 } 00:37:41.296 00:37:41.296 INFO: response: 00:37:41.296 { 00:37:41.296 "jsonrpc": "2.0", 00:37:41.296 "id": 1, 00:37:41.296 "result": true 00:37:41.296 } 00:37:41.296 00:37:41.296 05:26:47 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:41.296 05:26:47 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:41.296 05:26:47 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:41.296 05:26:47 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:37:41.296 INFO: Setting log level to 40 00:37:41.296 INFO: Setting log level to 40 00:37:41.296 INFO: Setting log level to 40 00:37:41.296 [2024-07-13 05:26:47.516311] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:41.296 05:26:47 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:41.296 05:26:47 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:37:41.296 05:26:47 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:41.296 05:26:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:41.296 05:26:47 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:37:41.296 05:26:47 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:41.296 05:26:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:44.582 Nvme0n1 00:37:44.582 05:26:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.582 05:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:37:44.582 05:26:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.582 05:26:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:44.582 05:26:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.582 05:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:37:44.582 05:26:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.582 05:26:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:44.582 05:26:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.582 05:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:44.582 05:26:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.582 05:26:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:44.582 [2024-07-13 05:26:50.464459] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:44.582 05:26:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.582 05:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:37:44.582 05:26:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.582 05:26:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:44.582 [ 00:37:44.582 { 00:37:44.582 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:37:44.582 "subtype": "Discovery", 00:37:44.582 "listen_addresses": [], 00:37:44.582 "allow_any_host": true, 00:37:44.582 "hosts": [] 00:37:44.582 }, 00:37:44.582 { 00:37:44.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:44.582 "subtype": "NVMe", 00:37:44.582 "listen_addresses": [ 00:37:44.582 { 00:37:44.582 "trtype": "TCP", 00:37:44.582 "adrfam": "IPv4", 00:37:44.582 "traddr": "10.0.0.2", 00:37:44.582 "trsvcid": "4420" 00:37:44.582 } 00:37:44.582 ], 00:37:44.582 "allow_any_host": true, 00:37:44.582 "hosts": [], 00:37:44.582 "serial_number": 
"SPDK00000000000001", 00:37:44.582 "model_number": "SPDK bdev Controller", 00:37:44.582 "max_namespaces": 1, 00:37:44.582 "min_cntlid": 1, 00:37:44.582 "max_cntlid": 65519, 00:37:44.582 "namespaces": [ 00:37:44.582 { 00:37:44.582 "nsid": 1, 00:37:44.582 "bdev_name": "Nvme0n1", 00:37:44.582 "name": "Nvme0n1", 00:37:44.582 "nguid": "B3FA8FA81FD646888864D6BDD2F1E91E", 00:37:44.582 "uuid": "b3fa8fa8-1fd6-4688-8864-d6bdd2f1e91e" 00:37:44.582 } 00:37:44.582 ] 00:37:44.582 } 00:37:44.582 ] 00:37:44.582 05:26:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.582 05:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:44.582 05:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:37:44.582 05:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:37:44.582 EAL: No free 2048 kB hugepages reported on node 1 00:37:44.582 05:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:37:44.582 05:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:44.582 05:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:37:44.582 05:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:37:44.582 EAL: No free 2048 kB hugepages reported on node 1 00:37:44.840 05:26:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:37:44.840 05:26:51 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:37:44.840 05:26:51 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:37:44.840 05:26:51 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:44.840 05:26:51 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.840 05:26:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:44.840 05:26:51 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.840 05:26:51 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:37:44.840 05:26:51 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:37:44.840 05:26:51 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:44.840 05:26:51 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:37:44.840 05:26:51 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:44.840 05:26:51 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:37:44.840 05:26:51 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:44.840 05:26:51 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:44.840 rmmod nvme_tcp 00:37:44.840 rmmod nvme_fabrics 00:37:44.840 rmmod nvme_keyring 00:37:44.840 05:26:51 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:44.840 05:26:51 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:37:44.840 05:26:51 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:37:44.840 05:26:51 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 875868 ']' 00:37:44.840 05:26:51 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 875868 00:37:44.840 05:26:51 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 875868 ']' 00:37:44.840 05:26:51 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 875868 00:37:44.840 05:26:51 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:37:44.840 05:26:51 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:44.840 05:26:51 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 875868 00:37:45.098 05:26:51 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:45.098 05:26:51 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:45.098 05:26:51 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 875868' 00:37:45.098 killing process with pid 875868 00:37:45.098 05:26:51 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 875868 00:37:45.098 05:26:51 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 875868 00:37:47.639 05:26:53 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:47.639 05:26:53 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:47.639 05:26:53 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:47.639 05:26:53 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:47.639 05:26:53 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:47.639 05:26:53 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:47.639 05:26:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:47.639 05:26:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:49.543 05:26:55 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:49.543 00:37:49.543 real 0m20.769s 00:37:49.543 user 0m34.264s 00:37:49.543 sys 0m2.744s 00:37:49.543 05:26:55 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:49.543 05:26:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:49.543 ************************************ 00:37:49.543 END TEST nvmf_identify_passthru 00:37:49.543 ************************************ 00:37:49.543 05:26:55 -- common/autotest_common.sh@1142 -- # return 0 00:37:49.543 05:26:55 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:49.543 05:26:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:49.543 05:26:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:49.543 05:26:55 -- common/autotest_common.sh@10 -- # set +x 00:37:49.543 ************************************ 00:37:49.543 START TEST nvmf_dif 00:37:49.543 ************************************ 00:37:49.543 05:26:56 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:49.802 * Looking for test storage... 
00:37:49.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:49.802 05:26:56 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:49.802 05:26:56 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:49.802 05:26:56 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:49.802 05:26:56 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:49.802 05:26:56 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.802 05:26:56 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.802 05:26:56 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.802 05:26:56 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:37:49.802 05:26:56 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:49.802 05:26:56 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:37:49.802 05:26:56 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:37:49.802 05:26:56 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:37:49.802 05:26:56 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:37:49.802 05:26:56 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:49.802 05:26:56 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:49.802 05:26:56 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:49.802 05:26:56 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:37:49.802 05:26:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:51.709 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:51.709 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
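The gather_supported_nvmf_pci_devs trace here is matching installed NICs against known vendor/device tables (0x8086:0x159b above is an Intel E810 variant served by the ice driver, consistent with SPDK_TEST_NVMF_NICS=e810 in this run's config) and then resolving each matched PCI function to its kernel netdev through sysfs, which yields the 'Found net devices under ...' lines above and below. A standalone approximation, assuming lspci is available:

  # device ID 8086:159b taken from this log's own 'Found ...' output
  for pci in $(lspci -Dmmn -d 8086:159b | awk '{print $1}'); do
      for dev in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$dev" ] && echo "Found net devices under $pci: $(basename "$dev")"
      done
  done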
00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:51.709 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:51.709 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:51.709 05:26:57 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:51.709 05:26:58 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:51.709 05:26:58 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:51.709 05:26:58 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:51.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:51.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:37:51.709 00:37:51.709 --- 10.0.0.2 ping statistics --- 00:37:51.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:51.709 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:37:51.709 05:26:58 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:51.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:51.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:37:51.709 00:37:51.709 --- 10.0.0.1 ping statistics --- 00:37:51.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:51.709 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:37:51.709 05:26:58 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:51.709 05:26:58 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:37:51.709 05:26:58 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:37:51.709 05:26:58 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:52.645 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:37:52.645 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:37:52.645 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:37:52.645 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:37:52.645 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:37:52.645 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:37:52.645 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:37:52.645 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:37:52.645 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:37:52.645 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:37:52.645 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:37:52.645 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:37:52.645 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:37:52.645 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:37:52.645 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:37:52.645 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:37:52.645 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:37:52.903 05:26:59 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:52.903 05:26:59 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:52.903 05:26:59 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:52.903 05:26:59 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:52.903 05:26:59 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:52.903 05:26:59 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:52.903 05:26:59 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:37:52.903 05:26:59 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:37:52.903 05:26:59 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:52.903 05:26:59 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:52.903 05:26:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:52.903 05:26:59 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=879285 00:37:52.903 05:26:59 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:37:52.903 05:26:59 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 879285 00:37:52.903 05:26:59 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 879285 ']' 00:37:52.903 05:26:59 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:52.903 05:26:59 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:52.903 05:26:59 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:52.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:52.903 05:26:59 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:52.903 05:26:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:52.903 [2024-07-13 05:26:59.285431] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:52.903 [2024-07-13 05:26:59.285577] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:52.903 EAL: No free 2048 kB hugepages reported on node 1 00:37:53.161 [2024-07-13 05:26:59.416417] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:53.161 [2024-07-13 05:26:59.660483] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:53.161 [2024-07-13 05:26:59.660575] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:53.161 [2024-07-13 05:26:59.660604] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:53.420 [2024-07-13 05:26:59.660628] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:53.420 [2024-07-13 05:26:59.660657] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
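The nvmf_tcp_init sequence traced above wires the two detected ports (cvl_0_0 and cvl_0_1) into a self-contained topology: the target port is moved into a private network namespace while the initiator port stays in the root namespace, so NVMe/TCP traffic between 10.0.0.1 and 10.0.0.2 traverses the ports themselves (presumably cabled back-to-back on this phy rig). Condensed from the commands in this run:

ip netns add cvl_0_0_ns_spdk                  # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                            # sanity-check both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# the target application then runs inside the namespace:
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF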
00:37:53.420 [2024-07-13 05:26:59.660714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:53.988 05:27:00 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:53.988 05:27:00 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:37:53.988 05:27:00 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:53.988 05:27:00 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:53.988 05:27:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:53.988 05:27:00 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:53.988 05:27:00 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:37:53.988 05:27:00 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:37:53.988 05:27:00 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:53.988 05:27:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:53.988 [2024-07-13 05:27:00.242335] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:53.988 05:27:00 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:53.988 05:27:00 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:37:53.988 05:27:00 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:53.988 05:27:00 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:53.988 05:27:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:53.988 ************************************ 00:37:53.988 START TEST fio_dif_1_default 00:37:53.988 ************************************ 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:53.988 bdev_null0 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:53.988 [2024-07-13 05:27:00.298598] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:53.988 05:27:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:53.988 { 00:37:53.988 "params": { 00:37:53.988 "name": "Nvme$subsystem", 00:37:53.988 "trtype": "$TEST_TRANSPORT", 00:37:53.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:53.988 "adrfam": "ipv4", 00:37:53.988 "trsvcid": "$NVMF_PORT", 00:37:53.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:53.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:53.989 "hdgst": ${hdgst:-false}, 00:37:53.989 "ddgst": ${ddgst:-false} 00:37:53.989 }, 00:37:53.989 "method": "bdev_nvme_attach_controller" 00:37:53.989 } 00:37:53.989 EOF 00:37:53.989 )") 00:37:53.989 05:27:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:53.989 05:27:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:37:53.989 05:27:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:53.989 05:27:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:37:53.989 05:27:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:53.989 05:27:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:37:53.989 05:27:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:53.989 05:27:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:53.989 05:27:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:37:53.989 05:27:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:53.989 05:27:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:53.989 05:27:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:37:53.989 05:27:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:37:53.989 05:27:00 nvmf_dif.fio_dif_1_default 
-- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:53.989 05:27:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:37:53.989 05:27:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:37:53.989 05:27:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:53.989 05:27:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:37:53.989 05:27:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:37:53.989 05:27:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:53.989 "params": { 00:37:53.989 "name": "Nvme0", 00:37:53.989 "trtype": "tcp", 00:37:53.989 "traddr": "10.0.0.2", 00:37:53.989 "adrfam": "ipv4", 00:37:53.989 "trsvcid": "4420", 00:37:53.989 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:53.989 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:53.989 "hdgst": false, 00:37:53.989 "ddgst": false 00:37:53.989 }, 00:37:53.989 "method": "bdev_nvme_attach_controller" 00:37:53.989 }' 00:37:53.989 05:27:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:53.989 05:27:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:53.989 05:27:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # break 00:37:53.989 05:27:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:53.989 05:27:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:54.248 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:54.248 fio-3.35 00:37:54.248 Starting 1 thread 00:37:54.248 EAL: No free 2048 kB hugepages reported on node 1 00:38:06.447 00:38:06.447 filename0: (groupid=0, jobs=1): err= 0: pid=879708: Sat Jul 13 05:27:11 2024 00:38:06.447 read: IOPS=95, BW=382KiB/s (391kB/s)(3824KiB/10018msec) 00:38:06.447 slat (nsec): min=5447, max=78991, avg=15163.14, stdev=5760.98 00:38:06.447 clat (usec): min=40898, max=45192, avg=41866.29, stdev=377.95 00:38:06.447 lat (usec): min=40910, max=45213, avg=41881.45, stdev=378.03 00:38:06.447 clat percentiles (usec): 00:38:06.447 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:38:06.447 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:38:06.447 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:38:06.447 | 99.00th=[42206], 99.50th=[42730], 99.90th=[45351], 99.95th=[45351], 00:38:06.447 | 99.99th=[45351] 00:38:06.447 bw ( KiB/s): min= 352, max= 384, per=99.55%, avg=380.80, stdev= 9.85, samples=20 00:38:06.447 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:38:06.447 lat (msec) : 50=100.00% 00:38:06.447 cpu : usr=91.45%, sys=8.03%, ctx=14, majf=0, minf=1636 00:38:06.447 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:06.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.447 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.447 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:06.447 00:38:06.447 Run status group 0 
(all jobs): 00:38:06.447 READ: bw=382KiB/s (391kB/s), 382KiB/s-382KiB/s (391kB/s-391kB/s), io=3824KiB (3916kB), run=10018-10018msec 00:38:06.447 ----------------------------------------------------- 00:38:06.447 Suppressions used: 00:38:06.447 count bytes template 00:38:06.447 1 8 /usr/src/fio/parse.c 00:38:06.447 1 8 libtcmalloc_minimal.so 00:38:06.447 1 904 libcrypto.so 00:38:06.447 ----------------------------------------------------- 00:38:06.447 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:06.447 00:38:06.447 real 0m12.282s 00:38:06.447 user 0m11.312s 00:38:06.447 sys 0m1.234s 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:06.447 ************************************ 00:38:06.447 END TEST fio_dif_1_default 00:38:06.447 ************************************ 00:38:06.447 05:27:12 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:38:06.447 05:27:12 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:38:06.447 05:27:12 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:06.447 05:27:12 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:06.447 05:27:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:06.447 ************************************ 00:38:06.447 START TEST fio_dif_1_multi_subsystems 00:38:06.447 ************************************ 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:06.447 bdev_null0 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:06.447 [2024-07-13 05:27:12.632727] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:06.447 bdev_null1 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:06.447 05:27:12 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:06.447 { 00:38:06.447 "params": { 00:38:06.447 "name": "Nvme$subsystem", 00:38:06.447 "trtype": "$TEST_TRANSPORT", 00:38:06.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:06.447 "adrfam": "ipv4", 00:38:06.447 "trsvcid": "$NVMF_PORT", 00:38:06.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:06.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:06.447 "hdgst": ${hdgst:-false}, 00:38:06.447 "ddgst": ${ddgst:-false} 00:38:06.447 }, 00:38:06.447 "method": "bdev_nvme_attach_controller" 00:38:06.447 } 00:38:06.447 EOF 00:38:06.447 )") 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:06.447 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:38:06.448 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:06.448 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:06.448 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:06.448 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:38:06.448 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:06.448 05:27:12 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:06.448 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:38:06.448 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:38:06.448 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:06.448 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:38:06.448 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:06.448 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:38:06.448 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:06.448 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:06.448 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:06.448 { 00:38:06.448 "params": { 00:38:06.448 "name": "Nvme$subsystem", 00:38:06.448 "trtype": "$TEST_TRANSPORT", 00:38:06.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:06.448 "adrfam": "ipv4", 00:38:06.448 "trsvcid": "$NVMF_PORT", 00:38:06.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:06.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:06.448 "hdgst": ${hdgst:-false}, 00:38:06.448 "ddgst": ${ddgst:-false} 00:38:06.448 }, 00:38:06.448 "method": "bdev_nvme_attach_controller" 00:38:06.448 } 00:38:06.448 EOF 00:38:06.448 )") 00:38:06.448 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:38:06.448 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:06.448 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:38:06.448 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
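The xtrace lines above show gen_nvmf_target_json assembling the bdev configuration that fio's spdk_bdev engine receives on /dev/fd/62: one bdev_nvme_attach_controller fragment per subsystem, comma-joined under IFS=, and validated/pretty-printed through jq (the resulting JSON is printed just below). A hedged sketch of the pattern, as an illustrative paraphrase rather than the verbatim nvmf/common.sh helper:

config=()
for subsystem in 0 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done
# join the per-subsystem fragments with commas and pretty-print them
(IFS=,; printf '[%s]\n' "${config[*]}") | jq .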
00:38:06.448 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:38:06.448 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:06.448 "params": { 00:38:06.448 "name": "Nvme0", 00:38:06.448 "trtype": "tcp", 00:38:06.448 "traddr": "10.0.0.2", 00:38:06.448 "adrfam": "ipv4", 00:38:06.448 "trsvcid": "4420", 00:38:06.448 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:06.448 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:06.448 "hdgst": false, 00:38:06.448 "ddgst": false 00:38:06.448 }, 00:38:06.448 "method": "bdev_nvme_attach_controller" 00:38:06.448 },{ 00:38:06.448 "params": { 00:38:06.448 "name": "Nvme1", 00:38:06.448 "trtype": "tcp", 00:38:06.448 "traddr": "10.0.0.2", 00:38:06.448 "adrfam": "ipv4", 00:38:06.448 "trsvcid": "4420", 00:38:06.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:06.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:06.448 "hdgst": false, 00:38:06.448 "ddgst": false 00:38:06.448 }, 00:38:06.448 "method": "bdev_nvme_attach_controller" 00:38:06.448 }' 00:38:06.448 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:06.448 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:06.448 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # break 00:38:06.448 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:06.448 05:27:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:06.709 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:06.709 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:06.709 fio-3.35 00:38:06.709 Starting 2 threads 00:38:06.709 EAL: No free 2048 kB hugepages reported on node 1 00:38:18.911 00:38:18.911 filename0: (groupid=0, jobs=1): err= 0: pid=881757: Sat Jul 13 05:27:23 2024 00:38:18.911 read: IOPS=185, BW=743KiB/s (761kB/s)(7440KiB/10009msec) 00:38:18.911 slat (nsec): min=6165, max=42319, avg=15168.13, stdev=5274.70 00:38:18.911 clat (usec): min=836, max=45135, avg=21475.83, stdev=20120.00 00:38:18.911 lat (usec): min=848, max=45152, avg=21491.00, stdev=20118.74 00:38:18.911 clat percentiles (usec): 00:38:18.911 | 1.00th=[ 865], 5.00th=[ 955], 10.00th=[ 1254], 20.00th=[ 1369], 00:38:18.912 | 30.00th=[ 1385], 40.00th=[ 1418], 50.00th=[41157], 60.00th=[41681], 00:38:18.912 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:38:18.912 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:38:18.912 | 99.99th=[45351] 00:38:18.912 bw ( KiB/s): min= 704, max= 768, per=50.02%, avg=742.45, stdev=30.38, samples=20 00:38:18.912 iops : min= 176, max= 192, avg=185.60, stdev= 7.61, samples=20 00:38:18.912 lat (usec) : 1000=5.38% 00:38:18.912 lat (msec) : 2=44.52%, 50=50.11% 00:38:18.912 cpu : usr=93.89%, sys=5.55%, ctx=29, majf=0, minf=1636 00:38:18.912 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:18.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:38:18.912 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.912 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:18.912 filename1: (groupid=0, jobs=1): err= 0: pid=881758: Sat Jul 13 05:27:23 2024 00:38:18.912 read: IOPS=185, BW=740KiB/s (758kB/s)(7408KiB/10008msec) 00:38:18.912 slat (usec): min=5, max=150, avg=15.27, stdev= 6.31 00:38:18.912 clat (usec): min=880, max=45545, avg=21568.97, stdev=20491.52 00:38:18.912 lat (usec): min=895, max=45573, avg=21584.24, stdev=20490.54 00:38:18.912 clat percentiles (usec): 00:38:18.912 | 1.00th=[ 898], 5.00th=[ 914], 10.00th=[ 922], 20.00th=[ 938], 00:38:18.912 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[41157], 60.00th=[41681], 00:38:18.912 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:38:18.912 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:38:18.912 | 99.99th=[45351] 00:38:18.912 bw ( KiB/s): min= 640, max= 768, per=49.82%, avg=739.20, stdev=38.71, samples=20 00:38:18.912 iops : min= 160, max= 192, avg=184.80, stdev= 9.68, samples=20 00:38:18.912 lat (usec) : 1000=45.25% 00:38:18.912 lat (msec) : 2=4.43%, 50=50.32% 00:38:18.912 cpu : usr=93.47%, sys=6.03%, ctx=14, majf=0, minf=1637 00:38:18.912 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:18.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.912 issued rwts: total=1852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.912 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:18.912 00:38:18.912 Run status group 0 (all jobs): 00:38:18.912 READ: bw=1483KiB/s (1519kB/s), 740KiB/s-743KiB/s (758kB/s-761kB/s), io=14.5MiB (15.2MB), run=10008-10009msec 00:38:18.912 ----------------------------------------------------- 00:38:18.912 Suppressions used: 00:38:18.912 count bytes template 00:38:18.912 2 16 /usr/src/fio/parse.c 00:38:18.912 1 8 libtcmalloc_minimal.so 00:38:18.912 1 904 libcrypto.so 00:38:18.912 ----------------------------------------------------- 00:38:18.912 00:38:18.912 05:27:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:38:18.912 05:27:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:38:18.912 05:27:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:18.912 05:27:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:18.912 05:27:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:38:18.912 05:27:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:18.912 05:27:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:18.912 05:27:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:18.912 05:27:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:18.912 05:27:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:18.912 05:27:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:18.912 05:27:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:18.912 05:27:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:18.912 
05:27:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:18.912 05:27:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:18.912 05:27:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:38:18.912 05:27:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:18.912 05:27:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:18.912 05:27:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:18.912 05:27:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:18.912 05:27:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:18.912 05:27:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:18.912 05:27:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:18.912 05:27:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:18.912 00:38:18.912 real 0m12.287s 00:38:18.912 user 0m20.896s 00:38:18.912 sys 0m1.577s 00:38:18.912 05:27:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:18.912 05:27:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:18.912 ************************************ 00:38:18.912 END TEST fio_dif_1_multi_subsystems 00:38:18.912 ************************************ 00:38:18.912 05:27:24 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:38:18.912 05:27:24 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:38:18.912 05:27:24 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:18.912 05:27:24 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:18.912 05:27:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:18.912 ************************************ 00:38:18.912 START TEST fio_dif_rand_params 00:38:18.912 ************************************ 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 
--md-size 16 --dif-type 3 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:18.912 bdev_null0 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:18.912 [2024-07-13 05:27:24.963334] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:18.912 { 00:38:18.912 "params": { 00:38:18.912 "name": "Nvme$subsystem", 00:38:18.912 "trtype": "$TEST_TRANSPORT", 00:38:18.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:18.912 "adrfam": "ipv4", 00:38:18.912 "trsvcid": "$NVMF_PORT", 00:38:18.912 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:18.912 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:18.912 "hdgst": ${hdgst:-false}, 00:38:18.912 "ddgst": ${ddgst:-false} 00:38:18.912 }, 00:38:18.912 "method": "bdev_nvme_attach_controller" 00:38:18.912 } 00:38:18.912 EOF 00:38:18.912 )") 00:38:18.912 05:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:18.913 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:18.913 05:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:18.913 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:18.913 05:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:18.913 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:18.913 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:18.913 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:18.913 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:38:18.913 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:18.913 05:27:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:18.913 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:18.913 05:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:18.913 05:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:18.913 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:18.913 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:38:18.913 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:18.913 05:27:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:38:18.913 05:27:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:38:18.913 05:27:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:18.913 "params": { 00:38:18.913 "name": "Nvme0", 00:38:18.913 "trtype": "tcp", 00:38:18.913 "traddr": "10.0.0.2", 00:38:18.913 "adrfam": "ipv4", 00:38:18.913 "trsvcid": "4420", 00:38:18.913 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:18.913 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:18.913 "hdgst": false, 00:38:18.913 "ddgst": false 00:38:18.913 }, 00:38:18.913 "method": "bdev_nvme_attach_controller" 00:38:18.913 }' 00:38:18.913 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:18.913 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:18.913 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:38:18.913 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:18.913 05:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:18.913 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:18.913 ... 
00:38:18.913 fio-3.35 00:38:18.913 Starting 3 threads 00:38:18.913 EAL: No free 2048 kB hugepages reported on node 1 00:38:25.467 00:38:25.467 filename0: (groupid=0, jobs=1): err= 0: pid=883275: Sat Jul 13 05:27:31 2024 00:38:25.467 read: IOPS=169, BW=21.2MiB/s (22.2MB/s)(107MiB/5048msec) 00:38:25.467 slat (nsec): min=6295, max=52326, avg=25139.43, stdev=6282.68 00:38:25.467 clat (usec): min=6571, max=61761, avg=17628.11, stdev=11988.38 00:38:25.467 lat (usec): min=6592, max=61788, avg=17653.24, stdev=11987.94 00:38:25.467 clat percentiles (usec): 00:38:25.467 | 1.00th=[ 7242], 5.00th=[ 9110], 10.00th=[10290], 20.00th=[11207], 00:38:25.467 | 30.00th=[12387], 40.00th=[13566], 50.00th=[14615], 60.00th=[15401], 00:38:25.467 | 70.00th=[16319], 80.00th=[17433], 90.00th=[21103], 95.00th=[53740], 00:38:25.467 | 99.00th=[57410], 99.50th=[61080], 99.90th=[61604], 99.95th=[61604], 00:38:25.467 | 99.99th=[61604] 00:38:25.467 bw ( KiB/s): min=15616, max=30976, per=34.31%, avg=21811.20, stdev=4534.40, samples=10 00:38:25.467 iops : min= 122, max= 242, avg=170.40, stdev=35.43, samples=10 00:38:25.467 lat (msec) : 10=8.77%, 20=80.12%, 50=2.57%, 100=8.54% 00:38:25.467 cpu : usr=93.90%, sys=5.55%, ctx=13, majf=0, minf=1639 00:38:25.467 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:25.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:25.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:25.467 issued rwts: total=855,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:25.467 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:25.467 filename0: (groupid=0, jobs=1): err= 0: pid=883276: Sat Jul 13 05:27:31 2024 00:38:25.467 read: IOPS=166, BW=20.8MiB/s (21.8MB/s)(105MiB/5046msec) 00:38:25.467 slat (nsec): min=6583, max=46753, avg=20161.80, stdev=5068.64 00:38:25.467 clat (usec): min=7166, max=61121, avg=17923.34, stdev=11631.12 00:38:25.467 lat (usec): min=7182, max=61139, avg=17943.50, stdev=11631.19 00:38:25.467 clat percentiles (usec): 00:38:25.467 | 1.00th=[ 7504], 5.00th=[ 7963], 10.00th=[10159], 20.00th=[11731], 00:38:25.467 | 30.00th=[12780], 40.00th=[14222], 50.00th=[15401], 60.00th=[16319], 00:38:25.467 | 70.00th=[17171], 80.00th=[18220], 90.00th=[20841], 95.00th=[53740], 00:38:25.467 | 99.00th=[58459], 99.50th=[58459], 99.90th=[61080], 99.95th=[61080], 00:38:25.467 | 99.99th=[61080] 00:38:25.467 bw ( KiB/s): min=16896, max=27648, per=33.79%, avg=21478.40, stdev=3312.54, samples=10 00:38:25.467 iops : min= 132, max= 216, avg=167.80, stdev=25.88, samples=10 00:38:25.467 lat (msec) : 10=9.04%, 20=78.72%, 50=4.40%, 100=7.85% 00:38:25.467 cpu : usr=94.39%, sys=5.09%, ctx=8, majf=0, minf=1634 00:38:25.467 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:25.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:25.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:25.467 issued rwts: total=841,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:25.467 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:25.467 filename0: (groupid=0, jobs=1): err= 0: pid=883277: Sat Jul 13 05:27:31 2024 00:38:25.467 read: IOPS=162, BW=20.2MiB/s (21.2MB/s)(101MiB/5006msec) 00:38:25.467 slat (nsec): min=6234, max=47278, avg=19628.42, stdev=5860.13 00:38:25.467 clat (usec): min=6032, max=95285, avg=18487.01, stdev=12886.60 00:38:25.467 lat (usec): min=6051, max=95307, avg=18506.64, stdev=12886.56 00:38:25.467 clat percentiles (usec): 
00:38:25.467 | 1.00th=[ 6521], 5.00th=[ 7373], 10.00th=[10159], 20.00th=[11338], 00:38:25.467 | 30.00th=[12649], 40.00th=[14353], 50.00th=[15533], 60.00th=[16319], 00:38:25.467 | 70.00th=[17433], 80.00th=[19268], 90.00th=[23462], 95.00th=[54264], 00:38:25.467 | 99.00th=[59507], 99.50th=[61080], 99.90th=[94897], 99.95th=[94897], 00:38:25.467 | 99.99th=[94897] 00:38:25.467 bw ( KiB/s): min=16128, max=24320, per=32.54%, avg=20688.90, stdev=2391.90, samples=10 00:38:25.467 iops : min= 126, max= 190, avg=161.60, stdev=18.69, samples=10 00:38:25.467 lat (msec) : 10=9.37%, 20=73.49%, 50=8.38%, 100=8.75% 00:38:25.467 cpu : usr=94.55%, sys=4.92%, ctx=9, majf=0, minf=1636 00:38:25.467 IO depths : 1=1.6%, 2=98.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:25.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:25.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:25.467 issued rwts: total=811,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:25.467 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:25.467 00:38:25.467 Run status group 0 (all jobs): 00:38:25.467 READ: bw=62.1MiB/s (65.1MB/s), 20.2MiB/s-21.2MiB/s (21.2MB/s-22.2MB/s), io=313MiB (329MB), run=5006-5048msec 00:38:26.035 ----------------------------------------------------- 00:38:26.035 Suppressions used: 00:38:26.035 count bytes template 00:38:26.035 5 44 /usr/src/fio/parse.c 00:38:26.035 1 8 libtcmalloc_minimal.so 00:38:26.035 1 904 libcrypto.so 00:38:26.035 ----------------------------------------------------- 00:38:26.035 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@28 -- # local sub 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:26.035 bdev_null0 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:26.035 [2024-07-13 05:27:32.310907] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:26.035 bdev_null1 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.035 
05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:26.035 bdev_null2 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:38:26.035 05:27:32 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:26.035 { 00:38:26.035 "params": { 00:38:26.035 "name": "Nvme$subsystem", 00:38:26.035 "trtype": "$TEST_TRANSPORT", 00:38:26.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:26.035 "adrfam": "ipv4", 00:38:26.035 "trsvcid": "$NVMF_PORT", 00:38:26.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:26.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:26.035 "hdgst": ${hdgst:-false}, 00:38:26.035 "ddgst": ${ddgst:-false} 00:38:26.035 }, 00:38:26.035 "method": "bdev_nvme_attach_controller" 00:38:26.035 } 00:38:26.035 EOF 00:38:26.035 )") 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:38:26.035 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:26.036 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:26.036 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:26.036 05:27:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:26.036 05:27:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:26.036 { 00:38:26.036 "params": { 00:38:26.036 "name": "Nvme$subsystem", 00:38:26.036 "trtype": "$TEST_TRANSPORT", 00:38:26.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:26.036 "adrfam": "ipv4", 00:38:26.036 "trsvcid": "$NVMF_PORT", 00:38:26.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:26.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:26.036 "hdgst": 
${hdgst:-false}, 00:38:26.036 "ddgst": ${ddgst:-false} 00:38:26.036 }, 00:38:26.036 "method": "bdev_nvme_attach_controller" 00:38:26.036 } 00:38:26.036 EOF 00:38:26.036 )") 00:38:26.036 05:27:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:26.036 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:26.036 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:26.036 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:26.036 05:27:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:26.036 05:27:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:26.036 { 00:38:26.036 "params": { 00:38:26.036 "name": "Nvme$subsystem", 00:38:26.036 "trtype": "$TEST_TRANSPORT", 00:38:26.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:26.036 "adrfam": "ipv4", 00:38:26.036 "trsvcid": "$NVMF_PORT", 00:38:26.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:26.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:26.036 "hdgst": ${hdgst:-false}, 00:38:26.036 "ddgst": ${ddgst:-false} 00:38:26.036 }, 00:38:26.036 "method": "bdev_nvme_attach_controller" 00:38:26.036 } 00:38:26.036 EOF 00:38:26.036 )") 00:38:26.036 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:26.036 05:27:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:26.036 05:27:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:26.036 05:27:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:38:26.036 05:27:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:38:26.036 05:27:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:26.036 "params": { 00:38:26.036 "name": "Nvme0", 00:38:26.036 "trtype": "tcp", 00:38:26.036 "traddr": "10.0.0.2", 00:38:26.036 "adrfam": "ipv4", 00:38:26.036 "trsvcid": "4420", 00:38:26.036 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:26.036 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:26.036 "hdgst": false, 00:38:26.036 "ddgst": false 00:38:26.036 }, 00:38:26.036 "method": "bdev_nvme_attach_controller" 00:38:26.036 },{ 00:38:26.036 "params": { 00:38:26.036 "name": "Nvme1", 00:38:26.036 "trtype": "tcp", 00:38:26.036 "traddr": "10.0.0.2", 00:38:26.036 "adrfam": "ipv4", 00:38:26.036 "trsvcid": "4420", 00:38:26.036 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:26.036 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:26.036 "hdgst": false, 00:38:26.036 "ddgst": false 00:38:26.036 }, 00:38:26.036 "method": "bdev_nvme_attach_controller" 00:38:26.036 },{ 00:38:26.036 "params": { 00:38:26.036 "name": "Nvme2", 00:38:26.036 "trtype": "tcp", 00:38:26.036 "traddr": "10.0.0.2", 00:38:26.036 "adrfam": "ipv4", 00:38:26.036 "trsvcid": "4420", 00:38:26.036 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:38:26.036 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:38:26.036 "hdgst": false, 00:38:26.036 "ddgst": false 00:38:26.036 }, 00:38:26.036 "method": "bdev_nvme_attach_controller" 00:38:26.036 }' 00:38:26.036 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:26.036 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:26.036 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:38:26.036 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:26.036 05:27:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:26.295 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:26.295 ... 00:38:26.295 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:26.295 ... 00:38:26.295 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:26.295 ... 00:38:26.295 fio-3.35 00:38:26.295 Starting 24 threads 00:38:26.295 EAL: No free 2048 kB hugepages reported on node 1 00:38:38.508 00:38:38.508 filename0: (groupid=0, jobs=1): err= 0: pid=884261: Sat Jul 13 05:27:44 2024 00:38:38.508 read: IOPS=206, BW=828KiB/s (848kB/s)(8288KiB/10013msec) 00:38:38.508 slat (usec): min=13, max=115, avg=59.54, stdev=11.43 00:38:38.508 clat (msec): min=19, max=347, avg=77.01, stdev=63.17 00:38:38.508 lat (msec): min=19, max=347, avg=77.07, stdev=63.17 00:38:38.508 clat percentiles (msec): 00:38:38.508 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 44], 20.00th=[ 44], 00:38:38.508 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:38:38.508 | 70.00th=[ 46], 80.00th=[ 159], 90.00th=[ 190], 95.00th=[ 203], 00:38:38.508 | 99.00th=[ 232], 99.50th=[ 347], 99.90th=[ 347], 99.95th=[ 347], 00:38:38.508 | 99.99th=[ 347] 00:38:38.508 bw ( KiB/s): min= 240, max= 1504, per=3.92%, avg=791.58, stdev=552.23, samples=19 00:38:38.508 iops : min= 60, max= 376, avg=197.89, stdev=138.06, samples=19 00:38:38.508 lat (msec) : 20=0.19%, 50=74.52%, 100=2.51%, 250=22.01%, 500=0.77% 00:38:38.508 cpu : usr=97.65%, sys=1.58%, ctx=72, majf=0, minf=1635 00:38:38.508 IO depths : 1=0.2%, 2=1.8%, 4=6.8%, 8=74.9%, 16=16.4%, 32=0.0%, >=64=0.0% 00:38:38.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.508 complete : 0=0.0%, 4=90.5%, 8=7.7%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.508 issued rwts: total=2072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.508 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:38.508 filename0: (groupid=0, jobs=1): err= 0: pid=884262: Sat Jul 13 05:27:44 2024 00:38:38.508 read: IOPS=205, BW=823KiB/s (843kB/s)(8256KiB/10029msec) 00:38:38.508 slat (usec): min=6, max=107, avg=34.77, stdev=10.40 00:38:38.508 clat (msec): min=27, max=275, avg=77.43, stdev=58.58 00:38:38.508 lat (msec): min=27, max=275, avg=77.47, stdev=58.59 00:38:38.508 clat percentiles (msec): 00:38:38.508 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:38:38.508 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:38:38.508 | 70.00th=[ 46], 80.00th=[ 146], 90.00th=[ 190], 95.00th=[ 203], 00:38:38.508 | 99.00th=[ 222], 99.50th=[ 262], 99.90th=[ 271], 99.95th=[ 275], 00:38:38.508 | 99.99th=[ 275] 00:38:38.508 bw ( KiB/s): min= 256, max= 1536, per=3.90%, avg=788.21, stdev=526.13, samples=19 00:38:38.508 iops : min= 64, max= 384, avg=197.05, stdev=131.53, samples=19 00:38:38.508 lat (msec) : 50=74.22%, 100=0.97%, 250=24.22%, 500=0.58% 00:38:38.508 cpu : usr=97.71%, sys=1.65%, ctx=55, majf=0, minf=1634 00:38:38.508 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:38:38.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.508 complete : 0=0.0%, 4=94.2%, 8=0.0%, 
16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.508 issued rwts: total=2064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.508 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:38.508 filename0: (groupid=0, jobs=1): err= 0: pid=884263: Sat Jul 13 05:27:44 2024 00:38:38.508 read: IOPS=204, BW=818KiB/s (838kB/s)(8192KiB/10014msec) 00:38:38.508 slat (usec): min=13, max=102, avg=33.33, stdev=12.36 00:38:38.508 clat (msec): min=30, max=278, avg=77.93, stdev=60.86 00:38:38.508 lat (msec): min=30, max=278, avg=77.96, stdev=60.86 00:38:38.508 clat percentiles (msec): 00:38:38.508 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:38:38.508 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:38:38.508 | 70.00th=[ 46], 80.00th=[ 159], 90.00th=[ 184], 95.00th=[ 203], 00:38:38.508 | 99.00th=[ 255], 99.50th=[ 279], 99.90th=[ 279], 99.95th=[ 279], 00:38:38.508 | 99.99th=[ 279] 00:38:38.508 bw ( KiB/s): min= 256, max= 1536, per=3.87%, avg=781.47, stdev=530.83, samples=19 00:38:38.508 iops : min= 64, max= 384, avg=195.37, stdev=132.71, samples=19 00:38:38.508 lat (msec) : 50=75.68%, 100=0.10%, 250=23.05%, 500=1.17% 00:38:38.508 cpu : usr=97.81%, sys=1.65%, ctx=27, majf=0, minf=1635 00:38:38.508 IO depths : 1=5.6%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.9%, 32=0.0%, >=64=0.0% 00:38:38.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.508 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.508 issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.508 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:38.508 filename0: (groupid=0, jobs=1): err= 0: pid=884264: Sat Jul 13 05:27:44 2024 00:38:38.508 read: IOPS=204, BW=818KiB/s (838kB/s)(8192KiB/10016msec) 00:38:38.508 slat (usec): min=8, max=113, avg=33.20, stdev= 7.56 00:38:38.508 clat (msec): min=40, max=260, avg=77.94, stdev=58.97 00:38:38.508 lat (msec): min=40, max=260, avg=77.97, stdev=58.97 00:38:38.508 clat percentiles (msec): 00:38:38.508 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:38:38.508 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:38:38.508 | 70.00th=[ 46], 80.00th=[ 159], 90.00th=[ 192], 95.00th=[ 199], 00:38:38.508 | 99.00th=[ 207], 99.50th=[ 209], 99.90th=[ 220], 99.95th=[ 262], 00:38:38.508 | 99.99th=[ 262] 00:38:38.508 bw ( KiB/s): min= 256, max= 1536, per=3.87%, avg=781.47, stdev=531.04, samples=19 00:38:38.508 iops : min= 64, max= 384, avg=195.37, stdev=132.76, samples=19 00:38:38.508 lat (msec) : 50=75.00%, 100=0.10%, 250=24.80%, 500=0.10% 00:38:38.508 cpu : usr=93.46%, sys=3.46%, ctx=438, majf=0, minf=1633 00:38:38.508 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:38.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.508 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.508 issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.508 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:38.508 filename0: (groupid=0, jobs=1): err= 0: pid=884265: Sat Jul 13 05:27:44 2024 00:38:38.508 read: IOPS=204, BW=818KiB/s (838kB/s)(8192KiB/10016msec) 00:38:38.508 slat (nsec): min=11766, max=68749, avg=31386.99, stdev=10291.42 00:38:38.508 clat (msec): min=30, max=280, avg=77.97, stdev=61.03 00:38:38.508 lat (msec): min=30, max=280, avg=78.00, stdev=61.02 00:38:38.508 clat percentiles (msec): 00:38:38.508 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 
00:38:38.508 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:38:38.508 | 70.00th=[ 46], 80.00th=[ 159], 90.00th=[ 190], 95.00th=[ 203], 00:38:38.508 | 99.00th=[ 255], 99.50th=[ 279], 99.90th=[ 279], 99.95th=[ 279], 00:38:38.508 | 99.99th=[ 279] 00:38:38.508 bw ( KiB/s): min= 255, max= 1536, per=3.87%, avg=781.42, stdev=531.98, samples=19 00:38:38.508 iops : min= 63, max= 384, avg=195.32, stdev=133.04, samples=19 00:38:38.508 lat (msec) : 50=75.68%, 100=0.10%, 250=23.05%, 500=1.17% 00:38:38.508 cpu : usr=97.88%, sys=1.67%, ctx=22, majf=0, minf=1633 00:38:38.508 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:38:38.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.508 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.508 issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.508 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:38.508 filename0: (groupid=0, jobs=1): err= 0: pid=884266: Sat Jul 13 05:27:44 2024 00:38:38.508 read: IOPS=244, BW=980KiB/s (1003kB/s)(9808KiB/10013msec) 00:38:38.508 slat (usec): min=11, max=139, avg=21.66, stdev=13.74 00:38:38.508 clat (msec): min=15, max=346, avg=65.19, stdev=62.10 00:38:38.508 lat (msec): min=15, max=347, avg=65.22, stdev=62.11 00:38:38.508 clat percentiles (msec): 00:38:38.508 | 1.00th=[ 21], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 31], 00:38:38.508 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 36], 60.00th=[ 44], 00:38:38.508 | 70.00th=[ 45], 80.00th=[ 120], 90.00th=[ 174], 95.00th=[ 203], 00:38:38.508 | 99.00th=[ 264], 99.50th=[ 309], 99.90th=[ 309], 99.95th=[ 347], 00:38:38.508 | 99.99th=[ 347] 00:38:38.508 bw ( KiB/s): min= 256, max= 1920, per=4.59%, avg=927.16, stdev=732.52, samples=19 00:38:38.508 iops : min= 64, max= 480, avg=231.79, stdev=183.13, samples=19 00:38:38.509 lat (msec) : 20=1.35%, 50=72.88%, 100=5.55%, 250=19.00%, 500=1.22% 00:38:38.509 cpu : usr=94.90%, sys=3.06%, ctx=168, majf=0, minf=1635 00:38:38.509 IO depths : 1=0.6%, 2=2.0%, 4=9.0%, 8=75.1%, 16=13.3%, 32=0.0%, >=64=0.0% 00:38:38.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.509 complete : 0=0.0%, 4=90.1%, 8=5.6%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.509 issued rwts: total=2452,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.509 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:38.509 filename0: (groupid=0, jobs=1): err= 0: pid=884267: Sat Jul 13 05:27:44 2024 00:38:38.509 read: IOPS=204, BW=817KiB/s (837kB/s)(8192KiB/10024msec) 00:38:38.509 slat (nsec): min=12154, max=76771, avg=29847.52, stdev=11511.78 00:38:38.509 clat (msec): min=25, max=285, avg=78.06, stdev=60.76 00:38:38.509 lat (msec): min=25, max=285, avg=78.09, stdev=60.75 00:38:38.509 clat percentiles (msec): 00:38:38.509 | 1.00th=[ 37], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:38:38.509 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:38:38.509 | 70.00th=[ 46], 80.00th=[ 163], 90.00th=[ 190], 95.00th=[ 203], 00:38:38.509 | 99.00th=[ 211], 99.50th=[ 284], 99.90th=[ 284], 99.95th=[ 284], 00:38:38.509 | 99.99th=[ 284] 00:38:38.509 bw ( KiB/s): min= 256, max= 1520, per=3.87%, avg=781.47, stdev=530.85, samples=19 00:38:38.509 iops : min= 64, max= 380, avg=195.37, stdev=132.71, samples=19 00:38:38.509 lat (msec) : 50=75.39%, 100=0.39%, 250=23.34%, 500=0.88% 00:38:38.509 cpu : usr=97.83%, sys=1.72%, ctx=18, majf=0, minf=1636 00:38:38.509 IO depths : 1=2.1%, 2=8.3%, 4=25.0%, 8=54.2%, 16=10.4%, 
32=0.0%, >=64=0.0% 00:38:38.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.509 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.509 issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.509 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:38.509 filename0: (groupid=0, jobs=1): err= 0: pid=884268: Sat Jul 13 05:27:44 2024 00:38:38.509 read: IOPS=205, BW=824KiB/s (844kB/s)(8256KiB/10020msec) 00:38:38.509 slat (usec): min=9, max=102, avg=34.23, stdev= 8.10 00:38:38.509 clat (msec): min=27, max=270, avg=77.36, stdev=57.91 00:38:38.509 lat (msec): min=27, max=270, avg=77.40, stdev=57.91 00:38:38.509 clat percentiles (msec): 00:38:38.509 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:38:38.509 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:38:38.509 | 70.00th=[ 46], 80.00th=[ 159], 90.00th=[ 190], 95.00th=[ 199], 00:38:38.509 | 99.00th=[ 207], 99.50th=[ 209], 99.90th=[ 245], 99.95th=[ 271], 00:38:38.509 | 99.99th=[ 271] 00:38:38.509 bw ( KiB/s): min= 256, max= 1536, per=3.90%, avg=788.21, stdev=526.11, samples=19 00:38:38.509 iops : min= 64, max= 384, avg=197.05, stdev=131.53, samples=19 00:38:38.509 lat (msec) : 50=74.32%, 100=0.97%, 250=24.61%, 500=0.10% 00:38:38.509 cpu : usr=97.84%, sys=1.57%, ctx=103, majf=0, minf=1634 00:38:38.509 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:38:38.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.509 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.509 issued rwts: total=2064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.509 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:38.509 filename1: (groupid=0, jobs=1): err= 0: pid=884269: Sat Jul 13 05:27:44 2024 00:38:38.509 read: IOPS=219, BW=876KiB/s (897kB/s)(8792KiB/10033msec) 00:38:38.509 slat (nsec): min=7402, max=96879, avg=21512.91, stdev=13637.53 00:38:38.509 clat (msec): min=30, max=221, avg=72.83, stdev=46.05 00:38:38.509 lat (msec): min=30, max=221, avg=72.86, stdev=46.06 00:38:38.509 clat percentiles (msec): 00:38:38.509 | 1.00th=[ 37], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:38:38.509 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:38:38.509 | 70.00th=[ 48], 80.00th=[ 130], 90.00th=[ 150], 95.00th=[ 169], 00:38:38.509 | 99.00th=[ 174], 99.50th=[ 186], 99.90th=[ 207], 99.95th=[ 222], 00:38:38.509 | 99.99th=[ 222] 00:38:38.509 bw ( KiB/s): min= 384, max= 1539, per=4.32%, avg=873.15, stdev=489.88, samples=20 00:38:38.509 iops : min= 96, max= 384, avg=218.20, stdev=122.36, samples=20 00:38:38.509 lat (msec) : 50=70.61%, 100=2.46%, 250=26.93% 00:38:38.509 cpu : usr=97.97%, sys=1.46%, ctx=88, majf=0, minf=1635 00:38:38.509 IO depths : 1=5.2%, 2=10.7%, 4=22.6%, 8=54.2%, 16=7.3%, 32=0.0%, >=64=0.0% 00:38:38.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.509 complete : 0=0.0%, 4=93.4%, 8=0.8%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.509 issued rwts: total=2198,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.509 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:38.509 filename1: (groupid=0, jobs=1): err= 0: pid=884270: Sat Jul 13 05:27:44 2024 00:38:38.509 read: IOPS=215, BW=862KiB/s (882kB/s)(8652KiB/10041msec) 00:38:38.509 slat (usec): min=9, max=151, avg=26.33, stdev=13.29 00:38:38.509 clat (msec): min=13, max=212, avg=73.77, stdev=50.82 00:38:38.509 lat (msec): min=13, max=212, 
avg=73.79, stdev=50.83 00:38:38.509 clat percentiles (msec): 00:38:38.509 | 1.00th=[ 23], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:38:38.509 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:38:38.509 | 70.00th=[ 47], 80.00th=[ 136], 90.00th=[ 167], 95.00th=[ 176], 00:38:38.509 | 99.00th=[ 203], 99.50th=[ 209], 99.90th=[ 211], 99.95th=[ 213], 00:38:38.509 | 99.99th=[ 213] 00:38:38.509 bw ( KiB/s): min= 256, max= 1536, per=4.28%, avg=865.10, stdev=512.14, samples=20 00:38:38.509 iops : min= 64, max= 384, avg=216.20, stdev=127.95, samples=20 00:38:38.509 lat (msec) : 20=0.74%, 50=70.97%, 100=2.59%, 250=25.71% 00:38:38.509 cpu : usr=97.82%, sys=1.63%, ctx=23, majf=0, minf=1637 00:38:38.509 IO depths : 1=5.2%, 2=11.0%, 4=23.9%, 8=52.5%, 16=7.4%, 32=0.0%, >=64=0.0% 00:38:38.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.509 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.509 issued rwts: total=2163,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.509 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:38.509 filename1: (groupid=0, jobs=1): err= 0: pid=884271: Sat Jul 13 05:27:44 2024 00:38:38.509 read: IOPS=204, BW=818KiB/s (837kB/s)(8192KiB/10020msec) 00:38:38.509 slat (usec): min=11, max=164, avg=62.81, stdev=11.78 00:38:38.509 clat (msec): min=29, max=316, avg=77.70, stdev=60.75 00:38:38.509 lat (msec): min=29, max=316, avg=77.76, stdev=60.75 00:38:38.509 clat percentiles (msec): 00:38:38.509 | 1.00th=[ 42], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:38:38.509 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:38:38.509 | 70.00th=[ 46], 80.00th=[ 163], 90.00th=[ 190], 95.00th=[ 203], 00:38:38.509 | 99.00th=[ 211], 99.50th=[ 284], 99.90th=[ 284], 99.95th=[ 317], 00:38:38.509 | 99.99th=[ 317] 00:38:38.509 bw ( KiB/s): min= 256, max= 1536, per=3.87%, avg=781.47, stdev=531.02, samples=19 00:38:38.509 iops : min= 64, max= 384, avg=195.37, stdev=132.75, samples=19 00:38:38.509 lat (msec) : 50=75.68%, 100=0.10%, 250=23.44%, 500=0.78% 00:38:38.509 cpu : usr=97.13%, sys=1.85%, ctx=127, majf=0, minf=1636 00:38:38.509 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:38.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.509 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.509 issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.509 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:38.509 filename1: (groupid=0, jobs=1): err= 0: pid=884272: Sat Jul 13 05:27:44 2024 00:38:38.509 read: IOPS=204, BW=818KiB/s (837kB/s)(8192KiB/10018msec) 00:38:38.509 slat (nsec): min=6104, max=90275, avg=58278.25, stdev=8524.53 00:38:38.509 clat (msec): min=36, max=289, avg=77.74, stdev=59.54 00:38:38.509 lat (msec): min=36, max=289, avg=77.80, stdev=59.54 00:38:38.509 clat percentiles (msec): 00:38:38.509 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:38:38.509 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:38:38.509 | 70.00th=[ 46], 80.00th=[ 163], 90.00th=[ 190], 95.00th=[ 203], 00:38:38.509 | 99.00th=[ 211], 99.50th=[ 215], 99.90th=[ 284], 99.95th=[ 288], 00:38:38.509 | 99.99th=[ 288] 00:38:38.509 bw ( KiB/s): min= 256, max= 1536, per=3.87%, avg=781.47, stdev=532.54, samples=19 00:38:38.509 iops : min= 64, max= 384, avg=195.37, stdev=133.13, samples=19 00:38:38.509 lat (msec) : 50=75.00%, 100=0.88%, 250=23.93%, 500=0.20% 00:38:38.509 cpu : 
usr=94.16%, sys=3.17%, ctx=140, majf=0, minf=1636 00:38:38.509 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:38:38.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.510 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.510 issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.510 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:38.510 filename1: (groupid=0, jobs=1): err= 0: pid=884273: Sat Jul 13 05:27:44 2024 00:38:38.510 read: IOPS=205, BW=824KiB/s (844kB/s)(8256KiB/10020msec) 00:38:38.510 slat (nsec): min=10296, max=97478, avg=34070.69, stdev=7385.11 00:38:38.510 clat (msec): min=27, max=243, avg=77.36, stdev=57.97 00:38:38.510 lat (msec): min=27, max=244, avg=77.39, stdev=57.97 00:38:38.510 clat percentiles (msec): 00:38:38.510 | 1.00th=[ 41], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:38:38.510 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:38:38.510 | 70.00th=[ 46], 80.00th=[ 148], 90.00th=[ 190], 95.00th=[ 203], 00:38:38.510 | 99.00th=[ 209], 99.50th=[ 209], 99.90th=[ 222], 99.95th=[ 245], 00:38:38.510 | 99.99th=[ 245] 00:38:38.510 bw ( KiB/s): min= 256, max= 1536, per=3.90%, avg=788.21, stdev=526.30, samples=19 00:38:38.510 iops : min= 64, max= 384, avg=197.05, stdev=131.57, samples=19 00:38:38.510 lat (msec) : 50=74.13%, 100=1.07%, 250=24.81% 00:38:38.510 cpu : usr=95.98%, sys=2.46%, ctx=52, majf=0, minf=1636 00:38:38.510 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:38:38.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.510 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.510 issued rwts: total=2064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.510 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:38.510 filename1: (groupid=0, jobs=1): err= 0: pid=884274: Sat Jul 13 05:27:44 2024 00:38:38.510 read: IOPS=210, BW=842KiB/s (862kB/s)(8448KiB/10031msec) 00:38:38.510 slat (usec): min=7, max=128, avg=27.65, stdev=14.03 00:38:38.510 clat (msec): min=32, max=278, avg=75.76, stdev=54.19 00:38:38.510 lat (msec): min=32, max=278, avg=75.78, stdev=54.19 00:38:38.510 clat percentiles (msec): 00:38:38.510 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:38:38.510 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:38:38.510 | 70.00th=[ 47], 80.00th=[ 140], 90.00th=[ 169], 95.00th=[ 190], 00:38:38.510 | 99.00th=[ 218], 99.50th=[ 247], 99.90th=[ 275], 99.95th=[ 279], 00:38:38.510 | 99.99th=[ 279] 00:38:38.510 bw ( KiB/s): min= 256, max= 1536, per=4.15%, avg=838.40, stdev=521.01, samples=20 00:38:38.510 iops : min= 64, max= 384, avg=209.60, stdev=130.25, samples=20 00:38:38.510 lat (msec) : 50=72.82%, 100=1.33%, 250=25.47%, 500=0.38% 00:38:38.510 cpu : usr=96.58%, sys=2.07%, ctx=77, majf=0, minf=1635 00:38:38.510 IO depths : 1=5.2%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.3%, 32=0.0%, >=64=0.0% 00:38:38.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.510 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.510 issued rwts: total=2112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.510 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:38.510 filename1: (groupid=0, jobs=1): err= 0: pid=884275: Sat Jul 13 05:27:44 2024 00:38:38.510 read: IOPS=207, BW=831KiB/s (851kB/s)(8320KiB/10008msec) 00:38:38.510 slat (usec): min=6, max=132, avg=19.94, 
stdev=12.73 00:38:38.510 clat (msec): min=43, max=210, avg=76.79, stdev=55.52 00:38:38.510 lat (msec): min=43, max=210, avg=76.81, stdev=55.52 00:38:38.510 clat percentiles (msec): 00:38:38.510 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:38:38.510 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:38:38.510 | 70.00th=[ 47], 80.00th=[ 144], 90.00th=[ 174], 95.00th=[ 199], 00:38:38.510 | 99.00th=[ 207], 99.50th=[ 211], 99.90th=[ 211], 99.95th=[ 211], 00:38:38.510 | 99.99th=[ 211] 00:38:38.510 bw ( KiB/s): min= 272, max= 1536, per=3.93%, avg=794.95, stdev=517.99, samples=19 00:38:38.510 iops : min= 68, max= 384, avg=198.74, stdev=129.50, samples=19 00:38:38.510 lat (msec) : 50=73.08%, 100=2.12%, 250=24.81% 00:38:38.510 cpu : usr=94.88%, sys=3.09%, ctx=77, majf=0, minf=1636 00:38:38.510 IO depths : 1=5.0%, 2=11.2%, 4=24.9%, 8=51.5%, 16=7.5%, 32=0.0%, >=64=0.0% 00:38:38.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.510 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.510 issued rwts: total=2080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.510 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:38.510 filename1: (groupid=0, jobs=1): err= 0: pid=884276: Sat Jul 13 05:27:44 2024 00:38:38.510 read: IOPS=204, BW=818KiB/s (838kB/s)(8192KiB/10016msec) 00:38:38.510 slat (usec): min=10, max=120, avg=33.02, stdev= 7.51 00:38:38.510 clat (msec): min=40, max=209, avg=77.93, stdev=58.73 00:38:38.510 lat (msec): min=40, max=209, avg=77.97, stdev=58.73 00:38:38.510 clat percentiles (msec): 00:38:38.510 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:38:38.510 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:38:38.510 | 70.00th=[ 46], 80.00th=[ 159], 90.00th=[ 192], 95.00th=[ 199], 00:38:38.510 | 99.00th=[ 207], 99.50th=[ 209], 99.90th=[ 209], 99.95th=[ 209], 00:38:38.510 | 99.99th=[ 209] 00:38:38.510 bw ( KiB/s): min= 256, max= 1536, per=3.87%, avg=781.47, stdev=531.02, samples=19 00:38:38.510 iops : min= 64, max= 384, avg=195.37, stdev=132.75, samples=19 00:38:38.510 lat (msec) : 50=75.00%, 250=25.00% 00:38:38.510 cpu : usr=93.56%, sys=3.59%, ctx=211, majf=0, minf=1636 00:38:38.510 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:38.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.510 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.510 issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.510 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:38.510 filename2: (groupid=0, jobs=1): err= 0: pid=884277: Sat Jul 13 05:27:44 2024 00:38:38.510 read: IOPS=228, BW=916KiB/s (938kB/s)(9184KiB/10031msec) 00:38:38.510 slat (nsec): min=4088, max=74057, avg=26866.76, stdev=12959.75 00:38:38.510 clat (msec): min=29, max=210, avg=69.68, stdev=39.35 00:38:38.510 lat (msec): min=29, max=210, avg=69.71, stdev=39.34 00:38:38.510 clat percentiles (msec): 00:38:38.510 | 1.00th=[ 35], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:38:38.510 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:38:38.510 | 70.00th=[ 86], 80.00th=[ 125], 90.00th=[ 134], 95.00th=[ 144], 00:38:38.510 | 99.00th=[ 163], 99.50th=[ 165], 99.90th=[ 176], 99.95th=[ 211], 00:38:38.510 | 99.99th=[ 211] 00:38:38.510 bw ( KiB/s): min= 432, max= 1536, per=4.52%, avg=912.00, stdev=453.38, samples=20 00:38:38.510 iops : min= 108, max= 384, avg=228.00, stdev=113.35, samples=20 
00:38:38.510 lat (msec) : 50=68.12%, 100=4.27%, 250=27.61% 00:38:38.510 cpu : usr=97.88%, sys=1.68%, ctx=17, majf=0, minf=1637 00:38:38.510 IO depths : 1=4.3%, 2=8.9%, 4=19.9%, 8=58.7%, 16=8.2%, 32=0.0%, >=64=0.0% 00:38:38.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.510 complete : 0=0.0%, 4=92.6%, 8=1.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.510 issued rwts: total=2296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.510 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:38.510 filename2: (groupid=0, jobs=1): err= 0: pid=884278: Sat Jul 13 05:27:44 2024 00:38:38.510 read: IOPS=207, BW=829KiB/s (849kB/s)(8320KiB/10032msec) 00:38:38.510 slat (usec): min=12, max=287, avg=59.82, stdev=17.36 00:38:38.510 clat (msec): min=26, max=221, avg=76.63, stdev=57.71 00:38:38.510 lat (msec): min=26, max=221, avg=76.69, stdev=57.71 00:38:38.510 clat percentiles (msec): 00:38:38.510 | 1.00th=[ 28], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:38:38.510 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:38:38.510 | 70.00th=[ 46], 80.00th=[ 148], 90.00th=[ 190], 95.00th=[ 199], 00:38:38.510 | 99.00th=[ 207], 99.50th=[ 209], 99.90th=[ 222], 99.95th=[ 222], 00:38:38.510 | 99.99th=[ 222] 00:38:38.510 bw ( KiB/s): min= 256, max= 1536, per=4.09%, avg=825.60, stdev=527.54, samples=20 00:38:38.510 iops : min= 64, max= 384, avg=206.40, stdev=131.88, samples=20 00:38:38.510 lat (msec) : 50=74.33%, 100=1.73%, 250=23.94% 00:38:38.510 cpu : usr=95.42%, sys=2.70%, ctx=84, majf=0, minf=1637 00:38:38.510 IO depths : 1=5.4%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.1%, 32=0.0%, >=64=0.0% 00:38:38.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.510 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.510 issued rwts: total=2080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.510 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:38.510 filename2: (groupid=0, jobs=1): err= 0: pid=884279: Sat Jul 13 05:27:44 2024 00:38:38.510 read: IOPS=204, BW=818KiB/s (838kB/s)(8192KiB/10014msec) 00:38:38.510 slat (nsec): min=12157, max=82383, avg=34917.40, stdev=11520.11 00:38:38.510 clat (msec): min=25, max=309, avg=77.94, stdev=62.09 00:38:38.510 lat (msec): min=25, max=309, avg=77.97, stdev=62.09 00:38:38.510 clat percentiles (msec): 00:38:38.510 | 1.00th=[ 30], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:38:38.510 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:38:38.511 | 70.00th=[ 46], 80.00th=[ 159], 90.00th=[ 194], 95.00th=[ 203], 00:38:38.511 | 99.00th=[ 268], 99.50th=[ 309], 99.90th=[ 309], 99.95th=[ 309], 00:38:38.511 | 99.99th=[ 309] 00:38:38.511 bw ( KiB/s): min= 256, max= 1536, per=3.87%, avg=781.47, stdev=541.20, samples=19 00:38:38.511 iops : min= 64, max= 384, avg=195.37, stdev=135.30, samples=19 00:38:38.511 lat (msec) : 50=75.20%, 100=0.68%, 250=22.75%, 500=1.37% 00:38:38.511 cpu : usr=95.57%, sys=2.59%, ctx=113, majf=0, minf=1634 00:38:38.511 IO depths : 1=5.3%, 2=11.5%, 4=24.9%, 8=51.1%, 16=7.2%, 32=0.0%, >=64=0.0% 00:38:38.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.511 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.511 issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.511 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:38.511 filename2: (groupid=0, jobs=1): err= 0: pid=884280: Sat Jul 13 05:27:44 2024 00:38:38.511 read: IOPS=204, BW=817KiB/s 
(837kB/s)(8192KiB/10021msec) 00:38:38.511 slat (usec): min=11, max=108, avg=32.51, stdev= 9.99 00:38:38.511 clat (msec): min=30, max=285, avg=77.99, stdev=60.78 00:38:38.511 lat (msec): min=30, max=285, avg=78.02, stdev=60.79 00:38:38.511 clat percentiles (msec): 00:38:38.511 | 1.00th=[ 35], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:38:38.511 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:38:38.511 | 70.00th=[ 47], 80.00th=[ 163], 90.00th=[ 190], 95.00th=[ 203], 00:38:38.511 | 99.00th=[ 232], 99.50th=[ 284], 99.90th=[ 284], 99.95th=[ 284], 00:38:38.511 | 99.99th=[ 284] 00:38:38.511 bw ( KiB/s): min= 256, max= 1536, per=3.87%, avg=781.47, stdev=531.02, samples=19 00:38:38.511 iops : min= 64, max= 384, avg=195.37, stdev=132.75, samples=19 00:38:38.511 lat (msec) : 50=75.20%, 100=0.59%, 250=23.34%, 500=0.88% 00:38:38.511 cpu : usr=97.06%, sys=2.06%, ctx=21, majf=0, minf=1634 00:38:38.511 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:38:38.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.511 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.511 issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.511 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:38.511 filename2: (groupid=0, jobs=1): err= 0: pid=884281: Sat Jul 13 05:27:44 2024 00:38:38.511 read: IOPS=218, BW=875KiB/s (896kB/s)(8776KiB/10031msec) 00:38:38.511 slat (usec): min=8, max=211, avg=30.77, stdev=19.14 00:38:38.511 clat (msec): min=27, max=220, avg=72.88, stdev=51.71 00:38:38.511 lat (msec): min=27, max=220, avg=72.91, stdev=51.72 00:38:38.511 clat percentiles (msec): 00:38:38.511 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 44], 20.00th=[ 44], 00:38:38.511 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:38:38.511 | 70.00th=[ 46], 80.00th=[ 136], 90.00th=[ 169], 95.00th=[ 180], 00:38:38.511 | 99.00th=[ 201], 99.50th=[ 205], 99.90th=[ 207], 99.95th=[ 220], 00:38:38.511 | 99.99th=[ 220] 00:38:38.511 bw ( KiB/s): min= 256, max= 1536, per=4.31%, avg=871.20, stdev=525.03, samples=20 00:38:38.511 iops : min= 64, max= 384, avg=217.80, stdev=131.26, samples=20 00:38:38.511 lat (msec) : 50=72.74%, 100=2.37%, 250=24.89% 00:38:38.511 cpu : usr=94.62%, sys=3.12%, ctx=132, majf=0, minf=1637 00:38:38.511 IO depths : 1=5.1%, 2=10.6%, 4=22.7%, 8=54.1%, 16=7.4%, 32=0.0%, >=64=0.0% 00:38:38.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.511 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.511 issued rwts: total=2194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.511 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:38.511 filename2: (groupid=0, jobs=1): err= 0: pid=884282: Sat Jul 13 05:27:44 2024 00:38:38.511 read: IOPS=204, BW=818KiB/s (837kB/s)(8192KiB/10020msec) 00:38:38.511 slat (usec): min=4, max=105, avg=62.45, stdev=11.11 00:38:38.511 clat (msec): min=30, max=316, avg=77.72, stdev=61.04 00:38:38.511 lat (msec): min=30, max=316, avg=77.78, stdev=61.04 00:38:38.511 clat percentiles (msec): 00:38:38.511 | 1.00th=[ 37], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:38:38.511 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:38:38.511 | 70.00th=[ 46], 80.00th=[ 159], 90.00th=[ 190], 95.00th=[ 203], 00:38:38.511 | 99.00th=[ 251], 99.50th=[ 284], 99.90th=[ 284], 99.95th=[ 317], 00:38:38.511 | 99.99th=[ 317] 00:38:38.511 bw ( KiB/s): min= 256, max= 1536, per=3.87%, avg=781.47, 
stdev=531.93, samples=19 00:38:38.511 iops : min= 64, max= 384, avg=195.37, stdev=132.98, samples=19 00:38:38.511 lat (msec) : 50=75.59%, 100=0.20%, 250=23.14%, 500=1.07% 00:38:38.511 cpu : usr=97.60%, sys=1.63%, ctx=73, majf=0, minf=1636 00:38:38.511 IO depths : 1=5.4%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.1%, 32=0.0%, >=64=0.0% 00:38:38.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.511 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.511 issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.511 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:38.511 filename2: (groupid=0, jobs=1): err= 0: pid=884283: Sat Jul 13 05:27:44 2024 00:38:38.511 read: IOPS=204, BW=816KiB/s (836kB/s)(8192KiB/10035msec) 00:38:38.511 slat (usec): min=10, max=101, avg=61.03, stdev= 9.01 00:38:38.511 clat (msec): min=36, max=306, avg=77.85, stdev=59.81 00:38:38.511 lat (msec): min=36, max=306, avg=77.91, stdev=59.80 00:38:38.511 clat percentiles (msec): 00:38:38.511 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:38:38.511 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:38:38.511 | 70.00th=[ 46], 80.00th=[ 163], 90.00th=[ 190], 95.00th=[ 203], 00:38:38.511 | 99.00th=[ 211], 99.50th=[ 232], 99.90th=[ 232], 99.95th=[ 305], 00:38:38.511 | 99.99th=[ 309] 00:38:38.511 bw ( KiB/s): min= 256, max= 1536, per=3.87%, avg=781.47, stdev=539.52, samples=19 00:38:38.511 iops : min= 64, max= 384, avg=195.37, stdev=134.88, samples=19 00:38:38.511 lat (msec) : 50=75.00%, 100=0.88%, 250=24.02%, 500=0.10% 00:38:38.511 cpu : usr=94.03%, sys=3.12%, ctx=188, majf=0, minf=1636 00:38:38.511 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:38.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.511 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.511 issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.511 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:38.511 filename2: (groupid=0, jobs=1): err= 0: pid=884284: Sat Jul 13 05:27:44 2024 00:38:38.511 read: IOPS=230, BW=923KiB/s (945kB/s)(9248KiB/10018msec) 00:38:38.511 slat (usec): min=6, max=166, avg=19.24, stdev=10.68 00:38:38.511 clat (msec): min=27, max=197, avg=69.16, stdev=40.54 00:38:38.511 lat (msec): min=27, max=197, avg=69.18, stdev=40.54 00:38:38.511 clat percentiles (msec): 00:38:38.511 | 1.00th=[ 29], 5.00th=[ 34], 10.00th=[ 44], 20.00th=[ 45], 00:38:38.511 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:38:38.511 | 70.00th=[ 83], 80.00th=[ 124], 90.00th=[ 138], 95.00th=[ 144], 00:38:38.511 | 99.00th=[ 167], 99.50th=[ 169], 99.90th=[ 199], 99.95th=[ 199], 00:38:38.511 | 99.99th=[ 199] 00:38:38.511 bw ( KiB/s): min= 432, max= 1539, per=4.55%, avg=918.75, stdev=469.77, samples=20 00:38:38.511 iops : min= 108, max= 384, avg=229.60, stdev=117.33, samples=20 00:38:38.511 lat (msec) : 50=69.03%, 100=3.63%, 250=27.34% 00:38:38.511 cpu : usr=94.79%, sys=2.87%, ctx=175, majf=0, minf=1635 00:38:38.511 IO depths : 1=4.3%, 2=8.8%, 4=19.7%, 8=59.0%, 16=8.3%, 32=0.0%, >=64=0.0% 00:38:38.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.511 complete : 0=0.0%, 4=92.6%, 8=1.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.511 issued rwts: total=2312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.512 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:38.512 00:38:38.512 Run 
status group 0 (all jobs): 00:38:38.512 READ: bw=19.7MiB/s (20.7MB/s), 816KiB/s-980KiB/s (836kB/s-1003kB/s), io=198MiB (208MB), run=10008-10041msec 00:38:38.771 ----------------------------------------------------- 00:38:38.771 Suppressions used: 00:38:38.771 count bytes template 00:38:38.771 45 402 /usr/src/fio/parse.c 00:38:38.771 1 8 libtcmalloc_minimal.so 00:38:38.771 1 904 libcrypto.so 00:38:38.771 ----------------------------------------------------- 00:38:38.771 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:38.771 05:27:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:38.771 bdev_null0 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:38.771 [2024-07-13 05:27:45.222945] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # 
for sub in "$@" 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:38.771 bdev_null1 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:38.771 { 00:38:38.771 "params": { 00:38:38.771 "name": "Nvme$subsystem", 00:38:38.771 "trtype": "$TEST_TRANSPORT", 00:38:38.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:38.771 "adrfam": "ipv4", 00:38:38.771 "trsvcid": "$NVMF_PORT", 00:38:38.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:38.771 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:38:38.771 "hdgst": ${hdgst:-false}, 00:38:38.771 "ddgst": ${ddgst:-false} 00:38:38.771 }, 00:38:38.771 "method": "bdev_nvme_attach_controller" 00:38:38.771 } 00:38:38.771 EOF 00:38:38.771 )") 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:38.771 { 00:38:38.771 "params": { 00:38:38.771 "name": "Nvme$subsystem", 00:38:38.771 "trtype": "$TEST_TRANSPORT", 00:38:38.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:38.771 "adrfam": "ipv4", 00:38:38.771 "trsvcid": "$NVMF_PORT", 00:38:38.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:38.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:38.771 "hdgst": ${hdgst:-false}, 00:38:38.771 "ddgst": ${ddgst:-false} 00:38:38.771 }, 00:38:38.771 "method": "bdev_nvme_attach_controller" 00:38:38.771 } 00:38:38.771 EOF 00:38:38.771 )") 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:38.771 05:27:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:38:39.030 05:27:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:38:39.030 05:27:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:39.030 "params": { 00:38:39.030 "name": "Nvme0", 00:38:39.030 "trtype": "tcp", 00:38:39.030 "traddr": "10.0.0.2", 00:38:39.030 "adrfam": "ipv4", 00:38:39.030 "trsvcid": "4420", 00:38:39.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:39.030 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:39.030 "hdgst": false, 00:38:39.030 "ddgst": false 00:38:39.030 }, 00:38:39.030 "method": "bdev_nvme_attach_controller" 00:38:39.030 },{ 00:38:39.030 "params": { 00:38:39.030 "name": "Nvme1", 00:38:39.030 "trtype": "tcp", 00:38:39.030 "traddr": "10.0.0.2", 00:38:39.030 "adrfam": "ipv4", 00:38:39.030 "trsvcid": "4420", 00:38:39.030 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:39.030 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:39.030 "hdgst": false, 00:38:39.030 "ddgst": false 00:38:39.030 }, 00:38:39.030 "method": "bdev_nvme_attach_controller" 00:38:39.030 }' 00:38:39.030 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:39.030 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:39.030 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:38:39.030 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:39.030 05:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:39.288 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:39.288 ... 00:38:39.288 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:39.288 ... 
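The resolved attach-controller JSON printed above reaches fio on /dev/fd/62 while the generated job file arrives on /dev/fd/61, with the spdk_bdev fio plugin preloaded alongside libasan. A standalone sketch of the same invocation using ordinary files instead of fd redirections; bdev.json and dif.fio are hypothetical file names, and the plugin path is the one used in this workspace:

    # Sketch: drive fio through the SPDK bdev plugin with regular files.
    # bdev.json holds the attach-controller config printed above;
    # dif.fio is a hypothetical job file matching the banner below
    # (randread, spdk_bdev ioengine, iodepth=8).
    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    LD_PRELOAD="/usr/lib64/libasan.so.8 $plugin" \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio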
00:38:39.288 fio-3.35 00:38:39.288 Starting 4 threads 00:38:39.288 EAL: No free 2048 kB hugepages reported on node 1 00:38:45.851 00:38:45.851 filename0: (groupid=0, jobs=1): err= 0: pid=885789: Sat Jul 13 05:27:51 2024 00:38:45.851 read: IOPS=1436, BW=11.2MiB/s (11.8MB/s)(56.1MiB/5002msec) 00:38:45.851 slat (usec): min=6, max=189, avg=20.52, stdev= 8.23 00:38:45.851 clat (usec): min=1076, max=15406, avg=5496.28, stdev=777.76 00:38:45.851 lat (usec): min=1094, max=15427, avg=5516.80, stdev=777.83 00:38:45.851 clat percentiles (usec): 00:38:45.851 | 1.00th=[ 3425], 5.00th=[ 4555], 10.00th=[ 4883], 20.00th=[ 5145], 00:38:45.851 | 30.00th=[ 5342], 40.00th=[ 5473], 50.00th=[ 5538], 60.00th=[ 5604], 00:38:45.851 | 70.00th=[ 5604], 80.00th=[ 5669], 90.00th=[ 5866], 95.00th=[ 6587], 00:38:45.851 | 99.00th=[ 8586], 99.50th=[ 9110], 99.90th=[13960], 99.95th=[14091], 00:38:45.851 | 99.99th=[15401] 00:38:45.851 bw ( KiB/s): min=11136, max=12000, per=24.91%, avg=11521.78, stdev=310.81, samples=9 00:38:45.851 iops : min= 1392, max= 1500, avg=1440.22, stdev=38.85, samples=9 00:38:45.851 lat (msec) : 2=0.22%, 4=1.70%, 10=97.94%, 20=0.14% 00:38:45.851 cpu : usr=92.90%, sys=6.40%, ctx=23, majf=0, minf=1637 00:38:45.851 IO depths : 1=0.1%, 2=13.8%, 4=59.3%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:45.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:45.851 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:45.851 issued rwts: total=7186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:45.851 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:45.851 filename0: (groupid=0, jobs=1): err= 0: pid=885790: Sat Jul 13 05:27:51 2024 00:38:45.851 read: IOPS=1424, BW=11.1MiB/s (11.7MB/s)(55.7MiB/5003msec) 00:38:45.851 slat (nsec): min=6886, max=80157, avg=21117.00, stdev=9107.57 00:38:45.851 clat (usec): min=1103, max=14700, avg=5537.99, stdev=799.70 00:38:45.851 lat (usec): min=1123, max=14723, avg=5559.11, stdev=799.75 00:38:45.851 clat percentiles (usec): 00:38:45.851 | 1.00th=[ 3589], 5.00th=[ 4686], 10.00th=[ 4883], 20.00th=[ 5211], 00:38:45.851 | 30.00th=[ 5342], 40.00th=[ 5473], 50.00th=[ 5538], 60.00th=[ 5538], 00:38:45.851 | 70.00th=[ 5604], 80.00th=[ 5669], 90.00th=[ 5932], 95.00th=[ 7111], 00:38:45.851 | 99.00th=[ 8586], 99.50th=[ 9110], 99.90th=[11076], 99.95th=[13304], 00:38:45.851 | 99.99th=[14746] 00:38:45.851 bw ( KiB/s): min=10736, max=11824, per=24.63%, avg=11394.20, stdev=334.50, samples=10 00:38:45.851 iops : min= 1342, max= 1478, avg=1424.20, stdev=41.84, samples=10 00:38:45.851 lat (msec) : 2=0.35%, 4=1.12%, 10=98.41%, 20=0.11% 00:38:45.851 cpu : usr=93.62%, sys=5.70%, ctx=9, majf=0, minf=1634 00:38:45.851 IO depths : 1=0.1%, 2=17.0%, 4=56.3%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:45.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:45.851 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:45.851 issued rwts: total=7128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:45.851 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:45.851 filename1: (groupid=0, jobs=1): err= 0: pid=885791: Sat Jul 13 05:27:51 2024 00:38:45.851 read: IOPS=1453, BW=11.4MiB/s (11.9MB/s)(56.8MiB/5001msec) 00:38:45.851 slat (nsec): min=6745, max=80035, avg=20264.12, stdev=8939.52 00:38:45.851 clat (usec): min=1168, max=13993, avg=5432.24, stdev=671.40 00:38:45.851 lat (usec): min=1186, max=14017, avg=5452.50, stdev=671.96 00:38:45.851 clat percentiles (usec): 00:38:45.851 | 1.00th=[ 
3556], 5.00th=[ 4490], 10.00th=[ 4752], 20.00th=[ 5145], 00:38:45.851 | 30.00th=[ 5276], 40.00th=[ 5407], 50.00th=[ 5473], 60.00th=[ 5538], 00:38:45.851 | 70.00th=[ 5604], 80.00th=[ 5669], 90.00th=[ 5800], 95.00th=[ 6063], 00:38:45.851 | 99.00th=[ 7832], 99.50th=[ 8356], 99.90th=[10814], 99.95th=[12125], 00:38:45.851 | 99.99th=[13960] 00:38:45.851 bw ( KiB/s): min=11152, max=12128, per=25.23%, avg=11671.11, stdev=381.00, samples=9 00:38:45.851 iops : min= 1394, max= 1516, avg=1458.89, stdev=47.62, samples=9 00:38:45.851 lat (msec) : 2=0.12%, 4=1.64%, 10=98.13%, 20=0.11% 00:38:45.851 cpu : usr=93.10%, sys=6.12%, ctx=12, majf=0, minf=1638 00:38:45.851 IO depths : 1=0.1%, 2=17.1%, 4=56.3%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:45.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:45.851 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:45.851 issued rwts: total=7269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:45.851 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:45.851 filename1: (groupid=0, jobs=1): err= 0: pid=885792: Sat Jul 13 05:27:51 2024 00:38:45.851 read: IOPS=1469, BW=11.5MiB/s (12.0MB/s)(57.4MiB/5004msec) 00:38:45.851 slat (usec): min=6, max=126, avg=17.83, stdev= 7.38 00:38:45.851 clat (usec): min=1404, max=10028, avg=5386.94, stdev=696.86 00:38:45.851 lat (usec): min=1438, max=10053, avg=5404.77, stdev=697.36 00:38:45.851 clat percentiles (usec): 00:38:45.851 | 1.00th=[ 3195], 5.00th=[ 4228], 10.00th=[ 4555], 20.00th=[ 5080], 00:38:45.851 | 30.00th=[ 5276], 40.00th=[ 5407], 50.00th=[ 5538], 60.00th=[ 5604], 00:38:45.851 | 70.00th=[ 5669], 80.00th=[ 5669], 90.00th=[ 5800], 95.00th=[ 6063], 00:38:45.851 | 99.00th=[ 7635], 99.50th=[ 8455], 99.90th=[ 9896], 99.95th=[10028], 00:38:45.851 | 99.99th=[10028] 00:38:45.851 bw ( KiB/s): min=11104, max=13056, per=25.40%, avg=11750.40, stdev=640.28, samples=10 00:38:45.851 iops : min= 1388, max= 1632, avg=1468.80, stdev=80.03, samples=10 00:38:45.851 lat (msec) : 2=0.16%, 4=3.13%, 10=96.64%, 20=0.07% 00:38:45.851 cpu : usr=93.38%, sys=6.04%, ctx=10, majf=0, minf=1637 00:38:45.851 IO depths : 1=0.1%, 2=11.8%, 4=61.2%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:45.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:45.852 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:45.852 issued rwts: total=7352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:45.852 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:45.852 00:38:45.852 Run status group 0 (all jobs): 00:38:45.852 READ: bw=45.2MiB/s (47.4MB/s), 11.1MiB/s-11.5MiB/s (11.7MB/s-12.0MB/s), io=226MiB (237MB), run=5001-5004msec 00:38:46.420 ----------------------------------------------------- 00:38:46.420 Suppressions used: 00:38:46.420 count bytes template 00:38:46.420 6 52 /usr/src/fio/parse.c 00:38:46.420 1 8 libtcmalloc_minimal.so 00:38:46.420 1 904 libcrypto.so 00:38:46.420 ----------------------------------------------------- 00:38:46.420 00:38:46.420 05:27:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:38:46.420 05:27:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:46.420 05:27:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:46.420 05:27:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:46.420 05:27:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:46.420 05:27:52 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:46.420 05:27:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:46.420 05:27:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:46.420 05:27:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:46.420 05:27:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:46.420 05:27:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:46.420 05:27:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:46.420 05:27:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:46.420 05:27:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:46.420 05:27:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:46.420 05:27:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:46.420 05:27:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:46.420 05:27:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:46.420 05:27:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:46.420 05:27:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:46.420 05:27:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:46.420 05:27:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:46.420 05:27:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:46.420 05:27:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:46.420 00:38:46.420 real 0m27.881s 00:38:46.420 user 4m32.308s 00:38:46.420 sys 0m9.034s 00:38:46.420 05:27:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:46.420 05:27:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:46.420 ************************************ 00:38:46.420 END TEST fio_dif_rand_params 00:38:46.420 ************************************ 00:38:46.420 05:27:52 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:38:46.420 05:27:52 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:38:46.420 05:27:52 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:46.420 05:27:52 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:46.420 05:27:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:46.420 ************************************ 00:38:46.420 START TEST fio_dif_digest 00:38:46.420 ************************************ 00:38:46.420 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:38:46.420 05:27:52 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:38:46.420 05:27:52 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:38:46.420 05:27:52 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:38:46.420 05:27:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # 
numjobs=3 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:46.421 bdev_null0 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:46.421 [2024-07-13 05:27:52.888014] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:38:46.421 { 00:38:46.421 "params": { 00:38:46.421 "name": "Nvme$subsystem", 00:38:46.421 "trtype": "$TEST_TRANSPORT", 00:38:46.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:46.421 "adrfam": "ipv4", 00:38:46.421 "trsvcid": "$NVMF_PORT", 00:38:46.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:46.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:46.421 "hdgst": ${hdgst:-false}, 00:38:46.421 "ddgst": ${ddgst:-false} 00:38:46.421 }, 00:38:46.421 "method": "bdev_nvme_attach_controller" 00:38:46.421 } 00:38:46.421 EOF 00:38:46.421 )") 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:46.421 "params": { 00:38:46.421 "name": "Nvme0", 00:38:46.421 "trtype": "tcp", 00:38:46.421 "traddr": "10.0.0.2", 00:38:46.421 "adrfam": "ipv4", 00:38:46.421 "trsvcid": "4420", 00:38:46.421 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:46.421 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:46.421 "hdgst": true, 00:38:46.421 "ddgst": true 00:38:46.421 }, 00:38:46.421 "method": "bdev_nvme_attach_controller" 00:38:46.421 }' 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # break 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:46.421 05:27:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:46.678 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:46.678 ... 00:38:46.678 fio-3.35 00:38:46.678 Starting 3 threads 00:38:46.934 EAL: No free 2048 kB hugepages reported on node 1 00:38:59.134 00:38:59.134 filename0: (groupid=0, jobs=1): err= 0: pid=886776: Sat Jul 13 05:28:04 2024 00:38:59.134 read: IOPS=143, BW=17.9MiB/s (18.8MB/s)(180MiB/10050msec) 00:38:59.134 slat (nsec): min=5708, max=50506, avg=23701.59, stdev=4229.32 00:38:59.134 clat (usec): min=14367, max=99968, avg=20849.63, stdev=10441.87 00:38:59.134 lat (usec): min=14392, max=99989, avg=20873.33, stdev=10441.85 00:38:59.134 clat percentiles (msec): 00:38:59.134 | 1.00th=[ 16], 5.00th=[ 17], 10.00th=[ 17], 20.00th=[ 18], 00:38:59.134 | 30.00th=[ 18], 40.00th=[ 18], 50.00th=[ 19], 60.00th=[ 19], 00:38:59.134 | 70.00th=[ 20], 80.00th=[ 20], 90.00th=[ 21], 95.00th=[ 58], 00:38:59.134 | 99.00th=[ 61], 99.50th=[ 61], 99.90th=[ 100], 99.95th=[ 101], 00:38:59.134 | 99.99th=[ 101] 00:38:59.134 bw ( KiB/s): min=14080, max=21504, per=28.25%, avg=18419.20, stdev=2098.70, samples=20 00:38:59.134 iops : min= 110, max= 168, avg=143.90, stdev=16.40, samples=20 00:38:59.134 lat (msec) : 20=85.99%, 50=7.70%, 100=6.31% 00:38:59.134 cpu : usr=94.87%, sys=4.40%, ctx=17, majf=0, minf=1636 00:38:59.134 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:59.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:59.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:59.134 issued rwts: total=1442,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:59.134 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:59.134 filename0: (groupid=0, jobs=1): err= 0: pid=886777: Sat Jul 13 05:28:04 2024 00:38:59.134 read: IOPS=182, BW=22.8MiB/s (23.9MB/s)(229MiB/10047msec) 00:38:59.134 slat (nsec): min=6536, max=45431, avg=23779.38, stdev=3831.01 00:38:59.134 clat (usec): min=10354, max=55184, avg=16408.87, stdev=2451.56 00:38:59.134 lat (usec): min=10375, max=55206, avg=16432.65, stdev=2451.38 00:38:59.134 clat percentiles (usec): 00:38:59.134 | 1.00th=[10945], 5.00th=[11994], 10.00th=[12649], 20.00th=[14877], 00:38:59.134 | 
30.00th=[15795], 40.00th=[16319], 50.00th=[16909], 60.00th=[17171], 00:38:59.134 | 70.00th=[17433], 80.00th=[17957], 90.00th=[18482], 95.00th=[19268], 00:38:59.134 | 99.00th=[20055], 99.50th=[20579], 99.90th=[48497], 99.95th=[55313], 00:38:59.134 | 99.99th=[55313] 00:38:59.134 bw ( KiB/s): min=21504, max=25088, per=35.90%, avg=23408.95, stdev=1041.61, samples=20 00:38:59.134 iops : min= 168, max= 196, avg=182.85, stdev= 8.18, samples=20 00:38:59.134 lat (msec) : 20=98.63%, 50=1.31%, 100=0.05% 00:38:59.134 cpu : usr=94.43%, sys=4.99%, ctx=21, majf=0, minf=1638 00:38:59.134 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:59.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:59.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:59.134 issued rwts: total=1831,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:59.134 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:59.134 filename0: (groupid=0, jobs=1): err= 0: pid=886778: Sat Jul 13 05:28:04 2024 00:38:59.134 read: IOPS=183, BW=23.0MiB/s (24.1MB/s)(231MiB/10048msec) 00:38:59.134 slat (nsec): min=4995, max=48168, avg=22436.90, stdev=4498.83 00:38:59.134 clat (usec): min=9266, max=58048, avg=16278.10, stdev=2641.95 00:38:59.134 lat (usec): min=9284, max=58066, avg=16300.53, stdev=2642.28 00:38:59.134 clat percentiles (usec): 00:38:59.134 | 1.00th=[10552], 5.00th=[11338], 10.00th=[11994], 20.00th=[14746], 00:38:59.134 | 30.00th=[15926], 40.00th=[16450], 50.00th=[16909], 60.00th=[17171], 00:38:59.134 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18482], 95.00th=[19006], 00:38:59.134 | 99.00th=[20055], 99.50th=[20579], 99.90th=[51643], 99.95th=[57934], 00:38:59.134 | 99.99th=[57934] 00:38:59.134 bw ( KiB/s): min=21760, max=25856, per=36.19%, avg=23592.65, stdev=1107.30, samples=20 00:38:59.134 iops : min= 170, max= 202, avg=184.30, stdev= 8.66, samples=20 00:38:59.134 lat (msec) : 10=0.16%, 20=98.43%, 50=1.30%, 100=0.11% 00:38:59.134 cpu : usr=94.13%, sys=5.20%, ctx=18, majf=0, minf=1636 00:38:59.134 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:59.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:59.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:59.135 issued rwts: total=1846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:59.135 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:59.135 00:38:59.135 Run status group 0 (all jobs): 00:38:59.135 READ: bw=63.7MiB/s (66.8MB/s), 17.9MiB/s-23.0MiB/s (18.8MB/s-24.1MB/s), io=640MiB (671MB), run=10047-10050msec 00:38:59.135 ----------------------------------------------------- 00:38:59.135 Suppressions used: 00:38:59.135 count bytes template 00:38:59.135 5 44 /usr/src/fio/parse.c 00:38:59.135 1 8 libtcmalloc_minimal.so 00:38:59.135 1 904 libcrypto.so 00:38:59.135 ----------------------------------------------------- 00:38:59.135 00:38:59.135 05:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:38:59.135 05:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:38:59.135 05:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:38:59.135 05:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:59.135 05:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:38:59.135 05:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:59.135 
05:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.135 05:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:59.135 05:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.135 05:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:59.135 05:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.135 05:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:59.135 05:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.135 00:38:59.135 real 0m12.308s 00:38:59.135 user 0m30.577s 00:38:59.135 sys 0m1.893s 00:38:59.135 05:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:59.135 05:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:59.135 ************************************ 00:38:59.135 END TEST fio_dif_digest 00:38:59.135 ************************************ 00:38:59.135 05:28:05 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:38:59.135 05:28:05 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:38:59.135 05:28:05 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:38:59.135 05:28:05 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:59.135 05:28:05 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:38:59.135 05:28:05 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:59.135 05:28:05 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:38:59.135 05:28:05 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:59.135 05:28:05 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:59.135 rmmod nvme_tcp 00:38:59.135 rmmod nvme_fabrics 00:38:59.135 rmmod nvme_keyring 00:38:59.135 05:28:05 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:59.135 05:28:05 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:38:59.135 05:28:05 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:38:59.135 05:28:05 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 879285 ']' 00:38:59.135 05:28:05 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 879285 00:38:59.135 05:28:05 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 879285 ']' 00:38:59.135 05:28:05 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 879285 00:38:59.135 05:28:05 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:38:59.135 05:28:05 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:59.135 05:28:05 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 879285 00:38:59.135 05:28:05 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:59.135 05:28:05 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:59.135 05:28:05 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 879285' 00:38:59.135 killing process with pid 879285 00:38:59.135 05:28:05 nvmf_dif -- common/autotest_common.sh@967 -- # kill 879285 00:38:59.135 05:28:05 nvmf_dif -- common/autotest_common.sh@972 -- # wait 879285 00:39:00.515 05:28:06 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:39:00.515 05:28:06 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:01.080 Waiting for block devices as requested 00:39:01.337 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:39:01.337 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:01.337 
0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:01.596 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:01.596 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:01.596 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:01.855 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:01.855 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:01.855 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:01.855 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:02.115 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:02.115 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:02.115 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:02.115 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:02.375 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:02.375 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:02.375 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:02.636 05:28:08 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:02.636 05:28:08 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:02.636 05:28:08 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:02.636 05:28:08 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:02.636 05:28:08 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:02.636 05:28:08 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:02.636 05:28:08 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:04.540 05:28:10 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:04.540 00:39:04.540 real 1m14.966s 00:39:04.540 user 6m40.564s 00:39:04.540 sys 0m20.117s 00:39:04.540 05:28:10 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:04.540 05:28:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:04.540 ************************************ 00:39:04.540 END TEST nvmf_dif 00:39:04.540 ************************************ 00:39:04.540 05:28:11 -- common/autotest_common.sh@1142 -- # return 0 00:39:04.540 05:28:11 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:04.540 05:28:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:04.540 05:28:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:04.540 05:28:11 -- common/autotest_common.sh@10 -- # set +x 00:39:04.540 ************************************ 00:39:04.540 START TEST nvmf_abort_qd_sizes 00:39:04.540 ************************************ 00:39:04.540 05:28:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:04.798 * Looking for test storage... 
00:39:04.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:04.798 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:04.799 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:04.799 05:28:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:04.799 05:28:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:04.799 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:04.799 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:04.799 05:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:39:04.799 05:28:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:06.699 05:28:12 
nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:06.699 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:06.699 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:06.699 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:06.699 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:06.699 05:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:06.699 05:28:13 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:06.699 05:28:13 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:06.699 05:28:13 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:06.699 05:28:13 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:06.699 05:28:13 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:06.699 05:28:13 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:06.699 05:28:13 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:06.699 05:28:13 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:06.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:06.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:39:06.699 00:39:06.699 --- 10.0.0.2 ping statistics --- 00:39:06.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:06.699 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:39:06.699 05:28:13 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:06.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:06.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:39:06.699 00:39:06.699 --- 10.0.0.1 ping statistics --- 00:39:06.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:06.699 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:39:06.699 05:28:13 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:06.699 05:28:13 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:39:06.699 05:28:13 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:39:06.699 05:28:13 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:08.117 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:39:08.117 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:39:08.117 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:39:08.117 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:39:08.117 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:39:08.117 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:39:08.117 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:39:08.117 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:39:08.117 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:39:08.117 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:39:08.117 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:39:08.118 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:39:08.118 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:39:08.118 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:39:08.118 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:39:08.118 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:39:09.053 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:39:09.053 05:28:15 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:09.053 05:28:15 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:09.053 05:28:15 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:09.053 05:28:15 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:09.053 05:28:15 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:09.053 05:28:15 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:09.053 05:28:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:39:09.053 05:28:15 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:09.053 05:28:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:09.053 05:28:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:09.053 05:28:15 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=891816 00:39:09.053 05:28:15 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:39:09.053 05:28:15 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 891816 00:39:09.053 05:28:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 891816 ']' 00:39:09.053 05:28:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:09.053 05:28:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:09.053 05:28:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:09.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:09.053 05:28:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:09.053 05:28:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:09.053 [2024-07-13 05:28:15.533672] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:39:09.053 [2024-07-13 05:28:15.533828] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:09.313 EAL: No free 2048 kB hugepages reported on node 1 00:39:09.313 [2024-07-13 05:28:15.673984] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:09.574 [2024-07-13 05:28:15.937220] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:09.574 [2024-07-13 05:28:15.937299] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:09.574 [2024-07-13 05:28:15.937328] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:09.574 [2024-07-13 05:28:15.937349] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:09.574 [2024-07-13 05:28:15.937370] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:09.574 [2024-07-13 05:28:15.937506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:09.574 [2024-07-13 05:28:15.937583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:09.574 [2024-07-13 05:28:15.937661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:09.574 [2024-07-13 05:28:15.937672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:39:10.141 05:28:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:10.141 05:28:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:39:10.141 05:28:16 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:10.141 05:28:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:10.141 05:28:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:10.141 05:28:16 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:10.141 05:28:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:39:10.141 05:28:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:39:10.141 05:28:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:39:10.141 05:28:16 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:39:10.141 05:28:16 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:39:10.141 05:28:16 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:39:10.141 05:28:16 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:39:10.141 05:28:16 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:39:10.141 05:28:16 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:39:10.141 05:28:16 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:39:10.141 05:28:16 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:39:10.141 05:28:16 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:39:10.141 05:28:16 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:39:10.141 05:28:16 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:39:10.141 05:28:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:39:10.141 05:28:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:39:10.141 05:28:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:39:10.141 05:28:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:10.141 05:28:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:10.141 05:28:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:10.141 ************************************ 00:39:10.141 START TEST spdk_target_abort 00:39:10.141 ************************************ 00:39:10.141 05:28:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:39:10.141 05:28:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:39:10.141 05:28:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:39:10.141 05:28:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:10.141 05:28:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:13.430 spdk_targetn1 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:13.430 [2024-07-13 05:28:19.385520] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:13.430 [2024-07-13 05:28:19.430820] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:13.430 05:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:13.430 EAL: No free 2048 kB hugepages 
reported on node 1 00:39:16.717 Initializing NVMe Controllers 00:39:16.717 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:16.717 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:16.717 Initialization complete. Launching workers. 00:39:16.717 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 7264, failed: 0 00:39:16.717 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1194, failed to submit 6070 00:39:16.717 success 745, unsuccess 449, failed 0 00:39:16.717 05:28:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:16.717 05:28:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:16.717 EAL: No free 2048 kB hugepages reported on node 1 00:39:20.002 Initializing NVMe Controllers 00:39:20.002 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:20.002 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:20.002 Initialization complete. Launching workers. 00:39:20.002 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8489, failed: 0 00:39:20.002 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1223, failed to submit 7266 00:39:20.002 success 309, unsuccess 914, failed 0 00:39:20.002 05:28:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:20.002 05:28:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:20.002 EAL: No free 2048 kB hugepages reported on node 1 00:39:23.322 Initializing NVMe Controllers 00:39:23.322 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:23.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:23.322 Initialization complete. Launching workers. 
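
[Note on the spdk_target_abort phase traced above] The test attaches the machine's local NVMe drive (0000:88:00.0) to the running nvmf target as a bdev, exports it over NVMe/TCP, then fires the abort example at it at queue depths 4, 24 and 64. Collected into one sketch for readability (every value is the one traced above; rpc.py talks to the default /var/tmp/spdk.sock):

    # Local PCIe SSD becomes bdev "spdk_targetn1", then gets exported over NVMe/TCP
    scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
    # One abort pass per queue depth; in the stats lines, "success/unsuccess"
    # counts aborts that did / did not catch their target I/O in flight,
    # not test failures.
    build/examples/abort -q 4 -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
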
00:39:23.322 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27431, failed: 0 00:39:23.322 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2693, failed to submit 24738 00:39:23.322 success 217, unsuccess 2476, failed 0 00:39:23.322 05:28:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:39:23.322 05:28:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.322 05:28:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:23.322 05:28:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.322 05:28:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:39:23.322 05:28:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.322 05:28:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:24.697 05:28:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.697 05:28:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 891816 00:39:24.697 05:28:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 891816 ']' 00:39:24.697 05:28:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 891816 00:39:24.697 05:28:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:39:24.697 05:28:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:24.697 05:28:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 891816 00:39:24.697 05:28:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:24.697 05:28:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:24.697 05:28:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 891816' 00:39:24.697 killing process with pid 891816 00:39:24.697 05:28:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 891816 00:39:24.697 05:28:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 891816 00:39:25.634 00:39:25.634 real 0m15.565s 00:39:25.634 user 0m59.049s 00:39:25.634 sys 0m3.031s 00:39:25.634 05:28:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:25.634 05:28:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:25.634 ************************************ 00:39:25.634 END TEST spdk_target_abort 00:39:25.634 ************************************ 00:39:25.634 05:28:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:39:25.634 05:28:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:39:25.634 05:28:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:25.634 05:28:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:25.634 05:28:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:25.634 
************************************ 00:39:25.634 START TEST kernel_target_abort 00:39:25.634 ************************************ 00:39:25.634 05:28:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:39:25.634 05:28:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:39:25.634 05:28:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:39:25.634 05:28:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:25.634 05:28:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:25.634 05:28:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:25.634 05:28:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:25.634 05:28:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:25.634 05:28:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:25.634 05:28:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:25.634 05:28:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:25.634 05:28:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:25.634 05:28:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:39:25.634 05:28:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:39:25.634 05:28:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:39:25.634 05:28:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:25.634 05:28:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:25.634 05:28:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:39:25.634 05:28:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:39:25.634 05:28:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:39:25.634 05:28:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:39:25.891 05:28:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:39:25.891 05:28:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:26.824 Waiting for block devices as requested 00:39:26.824 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:39:27.081 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:27.081 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:27.340 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:27.340 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:27.340 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:27.340 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:27.598 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:27.598 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:27.598 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:27.598 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:27.857 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:27.857 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:27.857 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:27.857 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:28.117 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:28.117 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:28.684 05:28:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:39:28.684 05:28:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:39:28.684 05:28:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:39:28.684 05:28:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:39:28.684 05:28:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:28.684 05:28:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:39:28.684 05:28:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:39:28.684 05:28:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:39:28.684 05:28:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:39:28.684 No valid GPT data, bailing 00:39:28.684 05:28:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:39:28.684 05:28:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:39:28.684 05:28:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:39:28.684 05:28:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:39:28.684 05:28:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:39:28.684 05:28:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:28.684 05:28:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:28.684 05:28:34 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:39:28.684 05:28:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:39:28.684 05:28:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:39:28.684 05:28:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:39:28.684 05:28:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:39:28.684 05:28:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:39:28.684 05:28:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:39:28.684 05:28:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:39:28.684 05:28:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:39:28.684 05:28:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:39:28.684 05:28:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:39:28.684 00:39:28.684 Discovery Log Number of Records 2, Generation counter 2 00:39:28.684 =====Discovery Log Entry 0====== 00:39:28.684 trtype: tcp 00:39:28.684 adrfam: ipv4 00:39:28.684 subtype: current discovery subsystem 00:39:28.684 treq: not specified, sq flow control disable supported 00:39:28.684 portid: 1 00:39:28.684 trsvcid: 4420 00:39:28.684 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:39:28.684 traddr: 10.0.0.1 00:39:28.684 eflags: none 00:39:28.684 sectype: none 00:39:28.684 =====Discovery Log Entry 1====== 00:39:28.684 trtype: tcp 00:39:28.684 adrfam: ipv4 00:39:28.684 subtype: nvme subsystem 00:39:28.684 treq: not specified, sq flow control disable supported 00:39:28.684 portid: 1 00:39:28.684 trsvcid: 4420 00:39:28.684 subnqn: nqn.2016-06.io.spdk:testnqn 00:39:28.684 traddr: 10.0.0.1 00:39:28.684 eflags: none 00:39:28.684 sectype: none 00:39:28.684 05:28:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:39:28.684 05:28:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:28.684 05:28:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:28.684 05:28:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:39:28.684 05:28:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:28.684 05:28:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:28.684 05:28:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:28.684 05:28:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:28.684 05:28:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:28.684 05:28:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:28.684 05:28:35 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:28.684 05:28:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:28.684 05:28:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:28.684 05:28:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:28.684 05:28:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:39:28.684 05:28:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:28.684 05:28:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:39:28.684 05:28:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:28.684 05:28:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:28.684 05:28:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:28.684 05:28:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:28.684 EAL: No free 2048 kB hugepages reported on node 1 00:39:31.972 Initializing NVMe Controllers 00:39:31.972 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:31.972 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:31.972 Initialization complete. Launching workers. 00:39:31.972 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29162, failed: 0 00:39:31.972 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29162, failed to submit 0 00:39:31.972 success 0, unsuccess 29162, failed 0 00:39:31.972 05:28:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:31.972 05:28:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:31.972 EAL: No free 2048 kB hugepages reported on node 1 00:39:35.260 Initializing NVMe Controllers 00:39:35.260 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:35.260 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:35.260 Initialization complete. Launching workers. 
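
[Note on the kernel_target_abort phase] This phase repeats the exercise against the Linux kernel nvmet target instead of SPDK. The configure_kernel_target trace above only shows bare `echo` values, so the sketch below spells out where they most plausibly land, assuming the stock nvmet configfs attribute names; attr_serial as the destination of the echoed SPDK-prefixed string is an assumption, the rest follow the standard layout:

    modprobe nvmet nvmet_tcp
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir -p "$sub/namespaces/1" "$port"
    echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$sub/attr_serial"   # assumption: serial attr
    echo 1 > "$sub/attr_allow_any_host"
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"            # the drive probed above
    echo 1 > "$sub/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"    # port goes live once the subsystem is linked

The `nvme discover` output earlier in the trace confirms the port is up before any abort traffic starts.
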
00:39:35.260 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55631, failed: 0 00:39:35.260 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14018, failed to submit 41613 00:39:35.260 success 0, unsuccess 14018, failed 0 00:39:35.260 05:28:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:35.260 05:28:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:35.260 EAL: No free 2048 kB hugepages reported on node 1 00:39:38.547 Initializing NVMe Controllers 00:39:38.547 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:38.547 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:38.547 Initialization complete. Launching workers. 00:39:38.547 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 54652, failed: 0 00:39:38.547 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13622, failed to submit 41030 00:39:38.547 success 0, unsuccess 13622, failed 0 00:39:38.547 05:28:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:39:38.547 05:28:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:39:38.547 05:28:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:39:38.547 05:28:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:38.547 05:28:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:38.547 05:28:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:39:38.547 05:28:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:38.547 05:28:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:39:38.547 05:28:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:39:38.547 05:28:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:39.488 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:39:39.488 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:39:39.488 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:39:39.488 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:39:39.488 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:39:39.488 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:39:39.488 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:39:39.488 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:39:39.488 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:39:39.488 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:39:39.488 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:39:39.488 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:39:39.488 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:39:39.488 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:39:39.488 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:39:39.488 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:39:40.433 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:39:40.433 00:39:40.433 real 0m14.762s 00:39:40.433 user 0m6.141s 00:39:40.433 sys 0m3.561s 00:39:40.433 05:28:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:40.433 05:28:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:40.433 ************************************ 00:39:40.433 END TEST kernel_target_abort 00:39:40.433 ************************************ 00:39:40.433 05:28:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:39:40.433 05:28:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:39:40.433 05:28:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:39:40.433 05:28:46 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:40.433 05:28:46 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:39:40.433 05:28:46 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:40.433 05:28:46 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:39:40.433 05:28:46 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:40.433 05:28:46 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:40.433 rmmod nvme_tcp 00:39:40.433 rmmod nvme_fabrics 00:39:40.691 rmmod nvme_keyring 00:39:40.691 05:28:46 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:40.691 05:28:46 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:39:40.691 05:28:46 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:39:40.691 05:28:46 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 891816 ']' 00:39:40.691 05:28:46 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 891816 00:39:40.691 05:28:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 891816 ']' 00:39:40.691 05:28:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 891816 00:39:40.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (891816) - No such process 00:39:40.691 05:28:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 891816 is not found' 00:39:40.691 Process with pid 891816 is not found 00:39:40.691 05:28:46 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:39:40.691 05:28:46 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:41.628 Waiting for block devices as requested 00:39:41.628 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:39:41.889 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:41.889 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:42.148 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:42.148 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:42.148 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:42.148 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:42.406 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:42.406 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:42.406 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:42.406 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:42.664 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:42.664 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:42.664 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 
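
[Note on clean_kernel_target, traced a little earlier] Teardown undoes the configfs tree in strict child-before-parent order; condensed from the trace:

    echo 0 > "$sub/namespaces/1/enable"                    # quiesce the namespace first
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"   # unlink subsystem from the port
    rmdir "$sub/namespaces/1" "$port" "$sub"
    modprobe -r nvmet_tcp nvmet

The ioatdma/nvme rebind churn surrounding this note is setup.sh handing the DMA channels and the SSD back to vfio-pci for the tests that follow.
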
00:39:42.664 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:42.922 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:42.922 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:42.922 05:28:49 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:42.922 05:28:49 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:42.923 05:28:49 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:42.923 05:28:49 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:42.923 05:28:49 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:42.923 05:28:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:42.923 05:28:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:45.457 05:28:51 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:45.457 00:39:45.457 real 0m40.349s 00:39:45.457 user 1m7.520s 00:39:45.457 sys 0m9.959s 00:39:45.457 05:28:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:45.457 05:28:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:45.457 ************************************ 00:39:45.457 END TEST nvmf_abort_qd_sizes 00:39:45.457 ************************************ 00:39:45.457 05:28:51 -- common/autotest_common.sh@1142 -- # return 0 00:39:45.457 05:28:51 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:45.457 05:28:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:45.457 05:28:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:45.457 05:28:51 -- common/autotest_common.sh@10 -- # set +x 00:39:45.457 ************************************ 00:39:45.457 START TEST keyring_file 00:39:45.457 ************************************ 00:39:45.457 05:28:51 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:45.457 * Looking for test storage... 
00:39:45.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:45.457 05:28:51 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:45.457 05:28:51 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:45.457 05:28:51 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:39:45.457 05:28:51 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:45.457 05:28:51 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:45.457 05:28:51 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:45.457 05:28:51 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:45.457 05:28:51 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:45.457 05:28:51 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:45.457 05:28:51 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:45.457 05:28:51 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:45.457 05:28:51 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:45.457 05:28:51 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:45.457 05:28:51 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:45.457 05:28:51 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:45.457 05:28:51 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:45.457 05:28:51 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:45.457 05:28:51 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:45.457 05:28:51 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:45.457 05:28:51 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:45.457 05:28:51 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:45.457 05:28:51 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:45.457 05:28:51 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:45.457 05:28:51 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:45.457 05:28:51 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:45.457 05:28:51 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:45.457 05:28:51 keyring_file -- paths/export.sh@5 -- # export PATH 00:39:45.457 05:28:51 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:45.457 05:28:51 keyring_file -- nvmf/common.sh@47 -- # : 0 00:39:45.457 05:28:51 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:45.457 05:28:51 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:45.457 05:28:51 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:45.457 05:28:51 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:45.457 05:28:51 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:45.457 05:28:51 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:45.457 05:28:51 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:45.457 05:28:51 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:45.457 05:28:51 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:45.457 05:28:51 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:45.457 05:28:51 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:45.457 05:28:51 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:39:45.457 05:28:51 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:39:45.457 05:28:51 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:39:45.457 05:28:51 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:45.457 05:28:51 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:45.457 05:28:51 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:45.457 05:28:51 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:45.458 05:28:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:45.458 05:28:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:45.458 05:28:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Kh7sMUaoOJ 00:39:45.458 05:28:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:45.458 05:28:51 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:45.458 05:28:51 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:39:45.458 05:28:51 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:45.458 05:28:51 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:39:45.458 05:28:51 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:39:45.458 05:28:51 keyring_file -- nvmf/common.sh@705 -- # python - 00:39:45.458 05:28:51 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Kh7sMUaoOJ 00:39:45.458 05:28:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Kh7sMUaoOJ 00:39:45.458 05:28:51 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Kh7sMUaoOJ 00:39:45.458 05:28:51 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:39:45.458 05:28:51 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:45.458 05:28:51 keyring_file -- keyring/common.sh@17 -- # name=key1 00:39:45.458 05:28:51 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:45.458 05:28:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:45.458 05:28:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:45.458 05:28:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Fo1TJak5iJ 00:39:45.458 05:28:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:45.458 05:28:51 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:45.458 05:28:51 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:39:45.458 05:28:51 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:45.458 05:28:51 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:39:45.458 05:28:51 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:39:45.458 05:28:51 keyring_file -- nvmf/common.sh@705 -- # python - 00:39:45.458 05:28:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Fo1TJak5iJ 00:39:45.458 05:28:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Fo1TJak5iJ 00:39:45.458 05:28:51 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Fo1TJak5iJ 00:39:45.458 05:28:51 keyring_file -- keyring/file.sh@30 -- # tgtpid=898050 00:39:45.458 05:28:51 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:45.458 05:28:51 keyring_file -- keyring/file.sh@32 -- # waitforlisten 898050 00:39:45.458 05:28:51 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 898050 ']' 00:39:45.458 05:28:51 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:45.458 05:28:51 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:45.458 05:28:51 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:45.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:45.458 05:28:51 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:45.458 05:28:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:45.458 [2024-07-13 05:28:51.659137] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
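
[Note on prep_key, traced above] Each key is a raw hex string turned into a TLS PSK interchange string and stored in a mktemp file with owner-only permissions. A sketch of what the inlined `python -` most likely computes, under one reading of the interchange format (NVMeTLSkey-1 prefix, two-hex-digit hash indicator, base64 of key plus little-endian CRC32; the exact layout is an assumption, not confirmed by the log):

    key_path=$(mktemp)   # e.g. /tmp/tmp.Kh7sMUaoOJ in the run above
    python3 - 00112233445566778899aabbccddeeff 0 > "$key_path" <<'PY'
    import base64, sys, zlib
    key, digest = bytes.fromhex(sys.argv[1]), int(sys.argv[2])
    crc = zlib.crc32(key).to_bytes(4, "little")   # assumption: CRC32 appended little-endian
    print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
    PY
    chmod 0600 "$key_path"   # keyring_file rejects group/other-accessible key files

The 0600 mode matters: later in this run, re-adding the same file after `chmod 0660` fails with "Invalid permissions for key file".
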
00:39:45.458 [2024-07-13 05:28:51.659313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid898050 ] 00:39:45.458 EAL: No free 2048 kB hugepages reported on node 1 00:39:45.458 [2024-07-13 05:28:51.796591] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:45.717 [2024-07-13 05:28:52.049358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:46.653 05:28:52 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:46.653 05:28:52 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:39:46.653 05:28:52 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:39:46.653 05:28:52 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:46.653 05:28:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:46.653 [2024-07-13 05:28:52.943271] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:46.653 null0 00:39:46.653 [2024-07-13 05:28:52.975323] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:46.653 [2024-07-13 05:28:52.975898] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:46.653 [2024-07-13 05:28:52.983330] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:39:46.653 05:28:52 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:46.653 05:28:52 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:46.653 05:28:52 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:39:46.653 05:28:52 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:46.653 05:28:52 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:39:46.653 05:28:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:46.653 05:28:52 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:39:46.653 05:28:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:46.653 05:28:52 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:46.653 05:28:52 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:46.653 05:28:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:46.653 [2024-07-13 05:28:52.995364] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:39:46.653 request: 00:39:46.653 { 00:39:46.653 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:39:46.653 "secure_channel": false, 00:39:46.653 "listen_address": { 00:39:46.653 "trtype": "tcp", 00:39:46.653 "traddr": "127.0.0.1", 00:39:46.653 "trsvcid": "4420" 00:39:46.653 }, 00:39:46.653 "method": "nvmf_subsystem_add_listener", 00:39:46.653 "req_id": 1 00:39:46.653 } 00:39:46.653 Got JSON-RPC error response 00:39:46.653 response: 00:39:46.653 { 00:39:46.653 "code": -32602, 00:39:46.653 "message": "Invalid parameters" 00:39:46.653 } 00:39:46.653 05:28:52 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:39:46.653 05:28:52 keyring_file -- common/autotest_common.sh@651 -- # es=1 
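
[Note on the negative test just traced] Adding a listener that already exists is expected to fail, and the test asserts that through autotest_common.sh's NOT wrapper. Its traced branches (`es=1`, `(( es > 128 ))`, `(( !es == 0 ))`) boil down to roughly this sketch; simplified, since the real helper also validates its argument via valid_exec_arg and can whitelist expected error strings:

    NOT() {
        local es=0
        "$@" || es=$?
        ((es > 128)) && return "$es"   # >128 means killed by a signal: a real failure
        ((es != 0))                    # succeed only when the wrapped command failed
    }
    # e.g. NOT rpc_cmd nvmf_subsystem_add_listener ...   # passes: the RPC errors out
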
00:39:46.653 05:28:52 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:46.653 05:28:52 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:46.653 05:28:52 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:46.653 05:28:52 keyring_file -- keyring/file.sh@46 -- # bperfpid=898186 00:39:46.653 05:28:52 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:39:46.653 05:28:53 keyring_file -- keyring/file.sh@48 -- # waitforlisten 898186 /var/tmp/bperf.sock 00:39:46.653 05:28:53 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 898186 ']' 00:39:46.653 05:28:53 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:46.653 05:28:53 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:46.653 05:28:53 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:46.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:46.653 05:28:53 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:46.653 05:28:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:46.653 [2024-07-13 05:28:53.074955] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:39:46.653 [2024-07-13 05:28:53.075106] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid898186 ] 00:39:46.653 EAL: No free 2048 kB hugepages reported on node 1 00:39:46.911 [2024-07-13 05:28:53.203553] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:47.170 [2024-07-13 05:28:53.440570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:47.735 05:28:53 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:47.735 05:28:53 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:39:47.735 05:28:53 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Kh7sMUaoOJ 00:39:47.735 05:28:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Kh7sMUaoOJ 00:39:47.735 05:28:54 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Fo1TJak5iJ 00:39:47.736 05:28:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Fo1TJak5iJ 00:39:47.994 05:28:54 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:39:47.994 05:28:54 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:39:47.994 05:28:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:47.994 05:28:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:47.994 05:28:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:48.252 05:28:54 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.Kh7sMUaoOJ == \/\t\m\p\/\t\m\p\.\K\h\7\s\M\U\a\o\O\J ]] 00:39:48.252 05:28:54 keyring_file -- keyring/file.sh@52 
-- # get_key key1 00:39:48.252 05:28:54 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:39:48.252 05:28:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:48.252 05:28:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:48.252 05:28:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:48.510 05:28:54 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Fo1TJak5iJ == \/\t\m\p\/\t\m\p\.\F\o\1\T\J\a\k\5\i\J ]] 00:39:48.510 05:28:54 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:39:48.510 05:28:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:48.510 05:28:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:48.510 05:28:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:48.510 05:28:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:48.510 05:28:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:48.767 05:28:55 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:39:48.767 05:28:55 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:39:48.767 05:28:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:48.767 05:28:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:48.767 05:28:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:48.767 05:28:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:48.767 05:28:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:49.025 05:28:55 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:39:49.025 05:28:55 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:49.025 05:28:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:49.284 [2024-07-13 05:28:55.716042] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:49.542 nvme0n1 00:39:49.542 05:28:55 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:39:49.542 05:28:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:49.542 05:28:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:49.542 05:28:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:49.542 05:28:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:49.542 05:28:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:49.800 05:28:56 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:39:49.800 05:28:56 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:39:49.800 05:28:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:49.800 05:28:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:49.800 05:28:56 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:39:49.800 05:28:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:49.800 05:28:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:50.058 05:28:56 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:39:50.058 05:28:56 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:50.058 Running I/O for 1 seconds... 00:39:50.992 00:39:50.992 Latency(us) 00:39:50.992 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:50.992 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:39:50.992 nvme0n1 : 1.02 4819.96 18.83 0.00 0.00 26343.79 6262.33 37088.52 00:39:50.992 =================================================================================================================== 00:39:50.992 Total : 4819.96 18.83 0.00 0.00 26343.79 6262.33 37088.52 00:39:50.992 0 00:39:50.992 05:28:57 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:50.992 05:28:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:51.249 05:28:57 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:39:51.249 05:28:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:51.249 05:28:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:51.249 05:28:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:51.250 05:28:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:51.250 05:28:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:51.507 05:28:57 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:39:51.507 05:28:57 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:39:51.507 05:28:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:51.507 05:28:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:51.507 05:28:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:51.507 05:28:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:51.507 05:28:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:51.765 05:28:58 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:39:51.765 05:28:58 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:51.765 05:28:58 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:39:51.765 05:28:58 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:51.765 05:28:58 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:39:51.765 05:28:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:51.765 05:28:58 keyring_file -- common/autotest_common.sh@640 -- # type -t 
bperf_cmd 00:39:51.765 05:28:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:51.765 05:28:58 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:51.765 05:28:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:52.023 [2024-07-13 05:28:58.517647] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:52.023 [2024-07-13 05:28:58.518456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (107): Transport endpoint is not connected 00:39:52.023 [2024-07-13 05:28:58.519425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:39:52.023 [2024-07-13 05:28:58.520417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:52.023 [2024-07-13 05:28:58.520452] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:52.023 [2024-07-13 05:28:58.520484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:52.279 request: 00:39:52.279 { 00:39:52.279 "name": "nvme0", 00:39:52.279 "trtype": "tcp", 00:39:52.279 "traddr": "127.0.0.1", 00:39:52.279 "adrfam": "ipv4", 00:39:52.279 "trsvcid": "4420", 00:39:52.279 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:52.279 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:52.279 "prchk_reftag": false, 00:39:52.279 "prchk_guard": false, 00:39:52.279 "hdgst": false, 00:39:52.279 "ddgst": false, 00:39:52.279 "psk": "key1", 00:39:52.279 "method": "bdev_nvme_attach_controller", 00:39:52.279 "req_id": 1 00:39:52.279 } 00:39:52.279 Got JSON-RPC error response 00:39:52.279 response: 00:39:52.279 { 00:39:52.279 "code": -5, 00:39:52.279 "message": "Input/output error" 00:39:52.279 } 00:39:52.279 05:28:58 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:39:52.279 05:28:58 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:52.279 05:28:58 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:52.279 05:28:58 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:52.279 05:28:58 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:39:52.279 05:28:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:52.279 05:28:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:52.279 05:28:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:52.279 05:28:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:52.279 05:28:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:52.537 05:28:58 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:39:52.537 05:28:58 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:39:52.537 05:28:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:52.537 05:28:58 keyring_file -- keyring/common.sh@12 -- 
# jq -r .refcnt 00:39:52.537 05:28:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:52.537 05:28:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:52.537 05:28:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:52.537 05:28:59 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:39:52.537 05:28:59 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:39:52.537 05:28:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:52.794 05:28:59 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:39:52.794 05:28:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:39:53.053 05:28:59 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:39:53.053 05:28:59 keyring_file -- keyring/file.sh@77 -- # jq length 00:39:53.053 05:28:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:53.310 05:28:59 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:39:53.310 05:28:59 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.Kh7sMUaoOJ 00:39:53.310 05:28:59 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Kh7sMUaoOJ 00:39:53.310 05:28:59 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:39:53.310 05:28:59 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Kh7sMUaoOJ 00:39:53.310 05:28:59 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:39:53.310 05:28:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:53.310 05:28:59 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:39:53.310 05:28:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:53.310 05:28:59 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Kh7sMUaoOJ 00:39:53.310 05:28:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Kh7sMUaoOJ 00:39:53.567 [2024-07-13 05:29:00.028454] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Kh7sMUaoOJ': 0100660 00:39:53.567 [2024-07-13 05:29:00.028518] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:39:53.567 request: 00:39:53.567 { 00:39:53.567 "name": "key0", 00:39:53.567 "path": "/tmp/tmp.Kh7sMUaoOJ", 00:39:53.567 "method": "keyring_file_add_key", 00:39:53.567 "req_id": 1 00:39:53.567 } 00:39:53.567 Got JSON-RPC error response 00:39:53.567 response: 00:39:53.567 { 00:39:53.567 "code": -1, 00:39:53.567 "message": "Operation not permitted" 00:39:53.567 } 00:39:53.567 05:29:00 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:39:53.567 05:29:00 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:53.567 05:29:00 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:53.567 05:29:00 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 
)) 00:39:53.567 05:29:00 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.Kh7sMUaoOJ 00:39:53.567 05:29:00 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Kh7sMUaoOJ 00:39:53.567 05:29:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Kh7sMUaoOJ 00:39:53.825 05:29:00 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.Kh7sMUaoOJ 00:39:53.825 05:29:00 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:39:53.825 05:29:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:53.825 05:29:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:53.825 05:29:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:53.825 05:29:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:53.825 05:29:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:54.083 05:29:00 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:39:54.083 05:29:00 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:54.083 05:29:00 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:39:54.083 05:29:00 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:54.083 05:29:00 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:39:54.083 05:29:00 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:54.083 05:29:00 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:39:54.083 05:29:00 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:54.083 05:29:00 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:54.083 05:29:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:54.341 [2024-07-13 05:29:00.806633] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Kh7sMUaoOJ': No such file or directory 00:39:54.341 [2024-07-13 05:29:00.806690] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:39:54.341 [2024-07-13 05:29:00.806742] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:39:54.341 [2024-07-13 05:29:00.806760] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:54.341 [2024-07-13 05:29:00.806778] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:39:54.341 request: 00:39:54.341 { 00:39:54.341 "name": "nvme0", 00:39:54.341 "trtype": "tcp", 00:39:54.341 "traddr": "127.0.0.1", 00:39:54.341 "adrfam": "ipv4", 00:39:54.341 "trsvcid": "4420", 00:39:54.341 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:39:54.341 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:54.341 "prchk_reftag": false, 00:39:54.341 "prchk_guard": false, 00:39:54.341 "hdgst": false, 00:39:54.341 "ddgst": false, 00:39:54.341 "psk": "key0", 00:39:54.341 "method": "bdev_nvme_attach_controller", 00:39:54.341 "req_id": 1 00:39:54.341 } 00:39:54.341 Got JSON-RPC error response 00:39:54.341 response: 00:39:54.341 { 00:39:54.341 "code": -19, 00:39:54.341 "message": "No such device" 00:39:54.341 } 00:39:54.341 05:29:00 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:39:54.341 05:29:00 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:54.341 05:29:00 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:54.341 05:29:00 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:54.341 05:29:00 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:39:54.341 05:29:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:54.598 05:29:01 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:54.598 05:29:01 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:54.598 05:29:01 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:54.598 05:29:01 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:54.598 05:29:01 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:54.598 05:29:01 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:54.598 05:29:01 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.h7HqMekOnI 00:39:54.598 05:29:01 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:54.598 05:29:01 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:54.598 05:29:01 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:39:54.598 05:29:01 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:54.598 05:29:01 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:39:54.598 05:29:01 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:39:54.598 05:29:01 keyring_file -- nvmf/common.sh@705 -- # python - 00:39:54.855 05:29:01 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.h7HqMekOnI 00:39:54.855 05:29:01 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.h7HqMekOnI 00:39:54.855 05:29:01 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.h7HqMekOnI 00:39:54.855 05:29:01 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.h7HqMekOnI 00:39:54.855 05:29:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.h7HqMekOnI 00:39:54.855 05:29:01 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:54.855 05:29:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:55.422 nvme0n1 00:39:55.422 05:29:01 keyring_file -- keyring/file.sh@99 
-- # get_refcnt key0 00:39:55.422 05:29:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:55.422 05:29:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:55.422 05:29:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:55.422 05:29:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:55.422 05:29:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:55.680 05:29:01 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:39:55.680 05:29:01 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:39:55.680 05:29:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:55.680 05:29:02 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:39:55.680 05:29:02 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:39:55.680 05:29:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:55.680 05:29:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:55.680 05:29:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:55.938 05:29:02 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:39:55.938 05:29:02 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:39:55.938 05:29:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:55.938 05:29:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:55.938 05:29:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:55.938 05:29:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:55.938 05:29:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:56.198 05:29:02 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:39:56.198 05:29:02 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:56.199 05:29:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:56.456 05:29:02 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:39:56.456 05:29:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:56.456 05:29:02 keyring_file -- keyring/file.sh@104 -- # jq length 00:39:56.713 05:29:03 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:39:56.713 05:29:03 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.h7HqMekOnI 00:39:56.713 05:29:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.h7HqMekOnI 00:39:56.971 05:29:03 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Fo1TJak5iJ 00:39:56.971 05:29:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Fo1TJak5iJ 00:39:57.229 05:29:03 keyring_file -- keyring/file.sh@109 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:57.229 05:29:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:57.486 nvme0n1 00:39:57.744 05:29:03 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:39:57.744 05:29:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:39:58.002 05:29:04 keyring_file -- keyring/file.sh@112 -- # config='{ 00:39:58.002 "subsystems": [ 00:39:58.002 { 00:39:58.002 "subsystem": "keyring", 00:39:58.002 "config": [ 00:39:58.002 { 00:39:58.002 "method": "keyring_file_add_key", 00:39:58.002 "params": { 00:39:58.002 "name": "key0", 00:39:58.002 "path": "/tmp/tmp.h7HqMekOnI" 00:39:58.002 } 00:39:58.002 }, 00:39:58.002 { 00:39:58.002 "method": "keyring_file_add_key", 00:39:58.002 "params": { 00:39:58.002 "name": "key1", 00:39:58.002 "path": "/tmp/tmp.Fo1TJak5iJ" 00:39:58.002 } 00:39:58.002 } 00:39:58.002 ] 00:39:58.002 }, 00:39:58.003 { 00:39:58.003 "subsystem": "iobuf", 00:39:58.003 "config": [ 00:39:58.003 { 00:39:58.003 "method": "iobuf_set_options", 00:39:58.003 "params": { 00:39:58.003 "small_pool_count": 8192, 00:39:58.003 "large_pool_count": 1024, 00:39:58.003 "small_bufsize": 8192, 00:39:58.003 "large_bufsize": 135168 00:39:58.003 } 00:39:58.003 } 00:39:58.003 ] 00:39:58.003 }, 00:39:58.003 { 00:39:58.003 "subsystem": "sock", 00:39:58.003 "config": [ 00:39:58.003 { 00:39:58.003 "method": "sock_set_default_impl", 00:39:58.003 "params": { 00:39:58.003 "impl_name": "posix" 00:39:58.003 } 00:39:58.003 }, 00:39:58.003 { 00:39:58.003 "method": "sock_impl_set_options", 00:39:58.003 "params": { 00:39:58.003 "impl_name": "ssl", 00:39:58.003 "recv_buf_size": 4096, 00:39:58.003 "send_buf_size": 4096, 00:39:58.003 "enable_recv_pipe": true, 00:39:58.003 "enable_quickack": false, 00:39:58.003 "enable_placement_id": 0, 00:39:58.003 "enable_zerocopy_send_server": true, 00:39:58.003 "enable_zerocopy_send_client": false, 00:39:58.003 "zerocopy_threshold": 0, 00:39:58.003 "tls_version": 0, 00:39:58.003 "enable_ktls": false 00:39:58.003 } 00:39:58.003 }, 00:39:58.003 { 00:39:58.003 "method": "sock_impl_set_options", 00:39:58.003 "params": { 00:39:58.003 "impl_name": "posix", 00:39:58.003 "recv_buf_size": 2097152, 00:39:58.003 "send_buf_size": 2097152, 00:39:58.003 "enable_recv_pipe": true, 00:39:58.003 "enable_quickack": false, 00:39:58.003 "enable_placement_id": 0, 00:39:58.003 "enable_zerocopy_send_server": true, 00:39:58.003 "enable_zerocopy_send_client": false, 00:39:58.003 "zerocopy_threshold": 0, 00:39:58.003 "tls_version": 0, 00:39:58.003 "enable_ktls": false 00:39:58.003 } 00:39:58.003 } 00:39:58.003 ] 00:39:58.003 }, 00:39:58.003 { 00:39:58.003 "subsystem": "vmd", 00:39:58.003 "config": [] 00:39:58.003 }, 00:39:58.003 { 00:39:58.003 "subsystem": "accel", 00:39:58.003 "config": [ 00:39:58.003 { 00:39:58.003 "method": "accel_set_options", 00:39:58.003 "params": { 00:39:58.003 "small_cache_size": 128, 00:39:58.003 "large_cache_size": 16, 00:39:58.003 "task_count": 2048, 00:39:58.003 "sequence_count": 2048, 00:39:58.003 "buf_count": 2048 00:39:58.003 } 00:39:58.003 } 00:39:58.003 ] 00:39:58.003 }, 00:39:58.003 { 00:39:58.003 
"subsystem": "bdev", 00:39:58.003 "config": [ 00:39:58.003 { 00:39:58.003 "method": "bdev_set_options", 00:39:58.003 "params": { 00:39:58.003 "bdev_io_pool_size": 65535, 00:39:58.003 "bdev_io_cache_size": 256, 00:39:58.003 "bdev_auto_examine": true, 00:39:58.003 "iobuf_small_cache_size": 128, 00:39:58.003 "iobuf_large_cache_size": 16 00:39:58.003 } 00:39:58.003 }, 00:39:58.003 { 00:39:58.003 "method": "bdev_raid_set_options", 00:39:58.003 "params": { 00:39:58.003 "process_window_size_kb": 1024 00:39:58.003 } 00:39:58.003 }, 00:39:58.003 { 00:39:58.003 "method": "bdev_iscsi_set_options", 00:39:58.003 "params": { 00:39:58.003 "timeout_sec": 30 00:39:58.003 } 00:39:58.003 }, 00:39:58.003 { 00:39:58.003 "method": "bdev_nvme_set_options", 00:39:58.003 "params": { 00:39:58.003 "action_on_timeout": "none", 00:39:58.003 "timeout_us": 0, 00:39:58.003 "timeout_admin_us": 0, 00:39:58.003 "keep_alive_timeout_ms": 10000, 00:39:58.003 "arbitration_burst": 0, 00:39:58.003 "low_priority_weight": 0, 00:39:58.003 "medium_priority_weight": 0, 00:39:58.003 "high_priority_weight": 0, 00:39:58.003 "nvme_adminq_poll_period_us": 10000, 00:39:58.003 "nvme_ioq_poll_period_us": 0, 00:39:58.003 "io_queue_requests": 512, 00:39:58.003 "delay_cmd_submit": true, 00:39:58.003 "transport_retry_count": 4, 00:39:58.003 "bdev_retry_count": 3, 00:39:58.003 "transport_ack_timeout": 0, 00:39:58.003 "ctrlr_loss_timeout_sec": 0, 00:39:58.003 "reconnect_delay_sec": 0, 00:39:58.003 "fast_io_fail_timeout_sec": 0, 00:39:58.003 "disable_auto_failback": false, 00:39:58.003 "generate_uuids": false, 00:39:58.003 "transport_tos": 0, 00:39:58.003 "nvme_error_stat": false, 00:39:58.003 "rdma_srq_size": 0, 00:39:58.003 "io_path_stat": false, 00:39:58.003 "allow_accel_sequence": false, 00:39:58.003 "rdma_max_cq_size": 0, 00:39:58.003 "rdma_cm_event_timeout_ms": 0, 00:39:58.003 "dhchap_digests": [ 00:39:58.003 "sha256", 00:39:58.003 "sha384", 00:39:58.003 "sha512" 00:39:58.003 ], 00:39:58.003 "dhchap_dhgroups": [ 00:39:58.003 "null", 00:39:58.003 "ffdhe2048", 00:39:58.003 "ffdhe3072", 00:39:58.003 "ffdhe4096", 00:39:58.003 "ffdhe6144", 00:39:58.003 "ffdhe8192" 00:39:58.003 ] 00:39:58.003 } 00:39:58.003 }, 00:39:58.003 { 00:39:58.003 "method": "bdev_nvme_attach_controller", 00:39:58.003 "params": { 00:39:58.003 "name": "nvme0", 00:39:58.003 "trtype": "TCP", 00:39:58.003 "adrfam": "IPv4", 00:39:58.003 "traddr": "127.0.0.1", 00:39:58.003 "trsvcid": "4420", 00:39:58.003 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:58.003 "prchk_reftag": false, 00:39:58.003 "prchk_guard": false, 00:39:58.003 "ctrlr_loss_timeout_sec": 0, 00:39:58.003 "reconnect_delay_sec": 0, 00:39:58.003 "fast_io_fail_timeout_sec": 0, 00:39:58.003 "psk": "key0", 00:39:58.003 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:58.003 "hdgst": false, 00:39:58.003 "ddgst": false 00:39:58.003 } 00:39:58.003 }, 00:39:58.003 { 00:39:58.003 "method": "bdev_nvme_set_hotplug", 00:39:58.003 "params": { 00:39:58.003 "period_us": 100000, 00:39:58.003 "enable": false 00:39:58.003 } 00:39:58.003 }, 00:39:58.003 { 00:39:58.003 "method": "bdev_wait_for_examine" 00:39:58.003 } 00:39:58.003 ] 00:39:58.003 }, 00:39:58.003 { 00:39:58.003 "subsystem": "nbd", 00:39:58.003 "config": [] 00:39:58.003 } 00:39:58.003 ] 00:39:58.003 }' 00:39:58.003 05:29:04 keyring_file -- keyring/file.sh@114 -- # killprocess 898186 00:39:58.003 05:29:04 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 898186 ']' 00:39:58.003 05:29:04 keyring_file -- common/autotest_common.sh@952 -- # kill -0 898186 00:39:58.003 05:29:04 
keyring_file -- common/autotest_common.sh@953 -- # uname 00:39:58.003 05:29:04 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:58.003 05:29:04 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 898186 00:39:58.003 05:29:04 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:39:58.003 05:29:04 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:39:58.003 05:29:04 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 898186' 00:39:58.003 killing process with pid 898186 00:39:58.003 05:29:04 keyring_file -- common/autotest_common.sh@967 -- # kill 898186 00:39:58.003 Received shutdown signal, test time was about 1.000000 seconds 00:39:58.003 00:39:58.003 Latency(us) 00:39:58.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:58.003 =================================================================================================================== 00:39:58.003 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:58.003 05:29:04 keyring_file -- common/autotest_common.sh@972 -- # wait 898186 00:39:58.938 05:29:05 keyring_file -- keyring/file.sh@117 -- # bperfpid=899783 00:39:58.938 05:29:05 keyring_file -- keyring/file.sh@119 -- # waitforlisten 899783 /var/tmp/bperf.sock 00:39:58.938 05:29:05 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 899783 ']' 00:39:58.938 05:29:05 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:58.938 05:29:05 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:39:58.938 05:29:05 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:58.938 05:29:05 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
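For reference, the keyring_file pass above condenses to the standalone shell sketch below. It is illustrative, not the test script itself: the rpc.py and bperf.sock paths and the 127.0.0.1:4420 target are the ones used in this run, the key bytes match the test's 00112233445566778899aabbccddeeff value, and the here-doc mirrors what format_interchange_psk in test/nvmf/common.sh appears to do (the key taken as ASCII bytes, a little-endian CRC32 appended, base64-encoded under the NVMeTLSkey-1 prefix).

rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
key=$(mktemp)
python3 - > "$key" <<'EOF'
import base64, zlib
k = b"00112233445566778899aabbccddeeff"       # key material, taken as ASCII bytes
crc = zlib.crc32(k).to_bytes(4, "little")     # 4-byte CRC32 the interchange format appends
print("NVMeTLSkey-1:00:" + base64.b64encode(k + crc).decode() + ":", end="")
EOF
chmod 0600 "$key"   # anything looser is rejected, as the 0660 attempt above showed (-1, Operation not permitted)
$rpc keyring_file_add_key key0 "$key"
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

The negative cases earlier in the log follow from the same commands: deleting the key file before the attach yields -19 (No such device), and a key the target side does not share fails the TLS handshake with -5 (Input/output error).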
00:39:58.938 05:29:05 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:39:58.938 "subsystems": [ 00:39:58.938 { 00:39:58.938 "subsystem": "keyring", 00:39:58.938 "config": [ 00:39:58.938 { 00:39:58.938 "method": "keyring_file_add_key", 00:39:58.938 "params": { 00:39:58.938 "name": "key0", 00:39:58.938 "path": "/tmp/tmp.h7HqMekOnI" 00:39:58.938 } 00:39:58.938 }, 00:39:58.938 { 00:39:58.938 "method": "keyring_file_add_key", 00:39:58.938 "params": { 00:39:58.938 "name": "key1", 00:39:58.938 "path": "/tmp/tmp.Fo1TJak5iJ" 00:39:58.938 } 00:39:58.938 } 00:39:58.938 ] 00:39:58.938 }, 00:39:58.938 { 00:39:58.938 "subsystem": "iobuf", 00:39:58.938 "config": [ 00:39:58.938 { 00:39:58.938 "method": "iobuf_set_options", 00:39:58.938 "params": { 00:39:58.938 "small_pool_count": 8192, 00:39:58.938 "large_pool_count": 1024, 00:39:58.938 "small_bufsize": 8192, 00:39:58.938 "large_bufsize": 135168 00:39:58.938 } 00:39:58.938 } 00:39:58.938 ] 00:39:58.938 }, 00:39:58.938 { 00:39:58.938 "subsystem": "sock", 00:39:58.938 "config": [ 00:39:58.938 { 00:39:58.938 "method": "sock_set_default_impl", 00:39:58.938 "params": { 00:39:58.938 "impl_name": "posix" 00:39:58.938 } 00:39:58.938 }, 00:39:58.938 { 00:39:58.938 "method": "sock_impl_set_options", 00:39:58.938 "params": { 00:39:58.938 "impl_name": "ssl", 00:39:58.938 "recv_buf_size": 4096, 00:39:58.938 "send_buf_size": 4096, 00:39:58.938 "enable_recv_pipe": true, 00:39:58.938 "enable_quickack": false, 00:39:58.938 "enable_placement_id": 0, 00:39:58.938 "enable_zerocopy_send_server": true, 00:39:58.938 "enable_zerocopy_send_client": false, 00:39:58.938 "zerocopy_threshold": 0, 00:39:58.938 "tls_version": 0, 00:39:58.938 "enable_ktls": false 00:39:58.938 } 00:39:58.938 }, 00:39:58.938 { 00:39:58.938 "method": "sock_impl_set_options", 00:39:58.938 "params": { 00:39:58.938 "impl_name": "posix", 00:39:58.938 "recv_buf_size": 2097152, 00:39:58.938 "send_buf_size": 2097152, 00:39:58.938 "enable_recv_pipe": true, 00:39:58.938 "enable_quickack": false, 00:39:58.938 "enable_placement_id": 0, 00:39:58.938 "enable_zerocopy_send_server": true, 00:39:58.938 "enable_zerocopy_send_client": false, 00:39:58.938 "zerocopy_threshold": 0, 00:39:58.938 "tls_version": 0, 00:39:58.938 "enable_ktls": false 00:39:58.938 } 00:39:58.938 } 00:39:58.938 ] 00:39:58.938 }, 00:39:58.938 { 00:39:58.938 "subsystem": "vmd", 00:39:58.938 "config": [] 00:39:58.938 }, 00:39:58.938 { 00:39:58.938 "subsystem": "accel", 00:39:58.938 "config": [ 00:39:58.938 { 00:39:58.938 "method": "accel_set_options", 00:39:58.938 "params": { 00:39:58.938 "small_cache_size": 128, 00:39:58.938 "large_cache_size": 16, 00:39:58.938 "task_count": 2048, 00:39:58.938 "sequence_count": 2048, 00:39:58.938 "buf_count": 2048 00:39:58.938 } 00:39:58.938 } 00:39:58.938 ] 00:39:58.938 }, 00:39:58.938 { 00:39:58.938 "subsystem": "bdev", 00:39:58.938 "config": [ 00:39:58.938 { 00:39:58.938 "method": "bdev_set_options", 00:39:58.938 "params": { 00:39:58.938 "bdev_io_pool_size": 65535, 00:39:58.938 "bdev_io_cache_size": 256, 00:39:58.938 "bdev_auto_examine": true, 00:39:58.938 "iobuf_small_cache_size": 128, 00:39:58.938 "iobuf_large_cache_size": 16 00:39:58.938 } 00:39:58.938 }, 00:39:58.938 { 00:39:58.938 "method": "bdev_raid_set_options", 00:39:58.938 "params": { 00:39:58.938 "process_window_size_kb": 1024 00:39:58.938 } 00:39:58.938 }, 00:39:58.938 { 00:39:58.938 "method": "bdev_iscsi_set_options", 00:39:58.938 "params": { 00:39:58.938 "timeout_sec": 30 00:39:58.938 } 00:39:58.938 }, 00:39:58.938 { 00:39:58.938 "method": 
"bdev_nvme_set_options", 00:39:58.938 "params": { 00:39:58.938 "action_on_timeout": "none", 00:39:58.938 "timeout_us": 0, 00:39:58.938 "timeout_admin_us": 0, 00:39:58.938 "keep_alive_timeout_ms": 10000, 00:39:58.938 "arbitration_burst": 0, 00:39:58.938 "low_priority_weight": 0, 00:39:58.938 "medium_priority_weight": 0, 00:39:58.938 "high_priority_weight": 0, 00:39:58.938 "nvme_adminq_poll_period_us": 10000, 00:39:58.938 "nvme_ioq_poll_period_us": 0, 00:39:58.938 "io_queue_requests": 512, 00:39:58.938 "delay_cmd_submit": true, 00:39:58.938 "transport_retry_count": 4, 00:39:58.938 "bdev_retry_count": 3, 00:39:58.938 "transport_ack_timeout": 0, 00:39:58.938 "ctrlr_loss_timeout_sec": 0, 00:39:58.938 "reconnect_delay_sec": 0, 00:39:58.938 "fast_io_fail_timeout_sec": 0, 00:39:58.938 "disable_auto_failback": false, 00:39:58.938 "generate_uuids": false, 00:39:58.938 "transport_tos": 0, 00:39:58.938 "nvme_error_stat": false, 00:39:58.938 "rdma_srq_size": 0, 00:39:58.938 "io_path_stat": false, 00:39:58.938 "allow_accel_sequence": false, 00:39:58.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:58.938 "rdma_max_cq_size": 0, 00:39:58.938 "rdma_cm_event_timeout_ms": 0, 00:39:58.938 "dhchap_digests": [ 00:39:58.938 "sha256", 00:39:58.938 "sha384", 00:39:58.938 "sha512" 00:39:58.938 ], 00:39:58.938 "dhchap_dhgroups": [ 00:39:58.938 "null", 00:39:58.938 "ffdhe2048", 00:39:58.938 "ffdhe3072", 00:39:58.938 "ffdhe4096", 00:39:58.938 "ffdhe6144", 00:39:58.938 "ffdhe8192" 00:39:58.938 ] 00:39:58.938 } 00:39:58.938 }, 00:39:58.938 { 00:39:58.938 "method": "bdev_nvme_attach_controller", 00:39:58.938 "params": { 00:39:58.938 "name": "nvme0", 00:39:58.938 "trtype": "TCP", 00:39:58.938 "adrfam": "IPv4", 00:39:58.938 "traddr": "127.0.0.1", 00:39:58.938 "trsvcid": "4420", 00:39:58.938 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:58.938 "prchk_reftag": false, 00:39:58.938 "prchk_guard": false, 00:39:58.938 "ctrlr_loss_timeout_sec": 0, 00:39:58.938 "reconnect_delay_sec": 0, 00:39:58.938 "fast_io_fail_timeout_sec": 0, 00:39:58.938 "psk": "key0", 00:39:58.938 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:58.938 "hdgst": false, 00:39:58.938 "ddgst": false 00:39:58.938 } 00:39:58.938 }, 00:39:58.938 { 00:39:58.938 "method": "bdev_nvme_set_hotplug", 00:39:58.938 "params": { 00:39:58.938 "period_us": 100000, 00:39:58.938 "enable": false 00:39:58.938 } 00:39:58.938 }, 00:39:58.938 { 00:39:58.938 "method": "bdev_wait_for_examine" 00:39:58.938 } 00:39:58.938 ] 00:39:58.938 }, 00:39:58.938 { 00:39:58.938 "subsystem": "nbd", 00:39:58.938 "config": [] 00:39:58.938 } 00:39:58.938 ] 00:39:58.938 }' 00:39:58.938 05:29:05 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:58.938 05:29:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:58.938 [2024-07-13 05:29:05.428162] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:39:58.939 [2024-07-13 05:29:05.428342] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid899783 ] 00:39:59.197 EAL: No free 2048 kB hugepages reported on node 1 00:39:59.197 [2024-07-13 05:29:05.550906] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:59.455 [2024-07-13 05:29:05.779457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:59.713 [2024-07-13 05:29:06.204341] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:59.971 05:29:06 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:59.971 05:29:06 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:39:59.971 05:29:06 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:39:59.971 05:29:06 keyring_file -- keyring/file.sh@120 -- # jq length 00:39:59.971 05:29:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:00.229 05:29:06 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:40:00.229 05:29:06 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:40:00.229 05:29:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:00.229 05:29:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:00.229 05:29:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:00.229 05:29:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:00.229 05:29:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:00.488 05:29:06 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:40:00.488 05:29:06 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:40:00.488 05:29:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:00.488 05:29:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:00.488 05:29:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:00.488 05:29:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:00.488 05:29:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:00.746 05:29:07 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:40:00.746 05:29:07 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:40:00.746 05:29:07 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:40:00.746 05:29:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:40:01.004 05:29:07 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:40:01.004 05:29:07 keyring_file -- keyring/file.sh@1 -- # cleanup 00:40:01.004 05:29:07 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.h7HqMekOnI /tmp/tmp.Fo1TJak5iJ 00:40:01.004 05:29:07 keyring_file -- keyring/file.sh@20 -- # killprocess 899783 00:40:01.004 05:29:07 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 899783 ']' 00:40:01.004 05:29:07 keyring_file -- common/autotest_common.sh@952 -- # kill -0 899783 00:40:01.004 05:29:07 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:40:01.004 05:29:07 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:01.004 05:29:07 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 899783 00:40:01.004 05:29:07 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:01.004 05:29:07 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:01.004 05:29:07 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 899783' 00:40:01.004 killing process with pid 899783 00:40:01.004 05:29:07 keyring_file -- common/autotest_common.sh@967 -- # kill 899783 00:40:01.004 Received shutdown signal, test time was about 1.000000 seconds 00:40:01.004 00:40:01.004 Latency(us) 00:40:01.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:01.004 =================================================================================================================== 00:40:01.004 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:40:01.004 05:29:07 keyring_file -- common/autotest_common.sh@972 -- # wait 899783 00:40:01.940 05:29:08 keyring_file -- keyring/file.sh@21 -- # killprocess 898050 00:40:01.940 05:29:08 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 898050 ']' 00:40:01.940 05:29:08 keyring_file -- common/autotest_common.sh@952 -- # kill -0 898050 00:40:01.940 05:29:08 keyring_file -- common/autotest_common.sh@953 -- # uname 00:40:01.940 05:29:08 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:01.940 05:29:08 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 898050 00:40:01.940 05:29:08 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:01.940 05:29:08 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:01.940 05:29:08 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 898050' 00:40:01.940 killing process with pid 898050 00:40:01.940 05:29:08 keyring_file -- common/autotest_common.sh@967 -- # kill 898050 00:40:01.940 [2024-07-13 05:29:08.434675] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:40:01.940 05:29:08 keyring_file -- common/autotest_common.sh@972 -- # wait 898050 00:40:04.472 00:40:04.472 real 0m19.335s 00:40:04.472 user 0m42.560s 00:40:04.472 sys 0m3.846s 00:40:04.472 05:29:10 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:04.472 05:29:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:04.472 ************************************ 00:40:04.472 END TEST keyring_file 00:40:04.472 ************************************ 00:40:04.472 05:29:10 -- common/autotest_common.sh@1142 -- # return 0 00:40:04.472 05:29:10 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:40:04.472 05:29:10 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:04.472 05:29:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:04.472 05:29:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:04.472 05:29:10 -- common/autotest_common.sh@10 -- # set +x 00:40:04.472 ************************************ 00:40:04.472 START TEST keyring_linux 00:40:04.472 ************************************ 00:40:04.472 05:29:10 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:04.472 * Looking for test storage... 00:40:04.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:04.472 05:29:10 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:04.472 05:29:10 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:04.472 05:29:10 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:04.472 05:29:10 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:04.472 05:29:10 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:04.472 05:29:10 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.472 05:29:10 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.472 05:29:10 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.472 05:29:10 keyring_linux -- paths/export.sh@5 -- # export PATH 00:40:04.472 05:29:10 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:04.472 05:29:10 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:04.472 05:29:10 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:04.472 05:29:10 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:04.472 05:29:10 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:40:04.472 05:29:10 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:40:04.472 05:29:10 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:40:04.472 05:29:10 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:40:04.472 05:29:10 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:04.472 05:29:10 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:40:04.472 05:29:10 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:04.472 05:29:10 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:04.472 05:29:10 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:40:04.472 05:29:10 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@705 -- # python - 00:40:04.472 05:29:10 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:40:04.472 05:29:10 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:40:04.472 /tmp/:spdk-test:key0 00:40:04.472 05:29:10 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:40:04.472 05:29:10 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:04.472 05:29:10 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:40:04.472 05:29:10 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:04.472 05:29:10 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:04.472 05:29:10 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:40:04.472 05:29:10 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:04.472 05:29:10 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:40:04.473 05:29:10 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:40:04.473 05:29:10 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:40:04.473 05:29:10 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:40:04.473 05:29:10 keyring_linux -- nvmf/common.sh@705 -- # python - 00:40:04.473 05:29:10 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:40:04.473 05:29:10 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:40:04.473 /tmp/:spdk-test:key1 00:40:04.473 05:29:10 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=900536 00:40:04.473 05:29:10 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:04.473 05:29:10 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 900536 00:40:04.473 05:29:10 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 900536 ']' 00:40:04.473 05:29:10 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:04.473 05:29:10 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:04.473 05:29:10 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:04.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:04.473 05:29:10 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:04.473 05:29:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:04.731 [2024-07-13 05:29:11.037570] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:40:04.731 [2024-07-13 05:29:11.037721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid900536 ] 00:40:04.731 EAL: No free 2048 kB hugepages reported on node 1 00:40:04.731 [2024-07-13 05:29:11.166903] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:04.990 [2024-07-13 05:29:11.423383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:05.956 05:29:12 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:05.956 05:29:12 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:40:05.956 05:29:12 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:40:05.956 05:29:12 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:05.956 05:29:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:05.956 [2024-07-13 05:29:12.268188] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:05.956 null0 00:40:05.956 [2024-07-13 05:29:12.300210] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:05.956 [2024-07-13 05:29:12.300745] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:05.956 05:29:12 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:05.956 05:29:12 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:40:05.956 1021000170 00:40:05.956 05:29:12 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:40:05.956 113328319 00:40:05.956 05:29:12 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=900681 00:40:05.956 05:29:12 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 900681 /var/tmp/bperf.sock 00:40:05.956 05:29:12 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:40:05.956 05:29:12 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 900681 ']' 00:40:05.956 05:29:12 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:05.956 05:29:12 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:05.956 05:29:12 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:05.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:05.956 05:29:12 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:05.956 05:29:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:05.956 [2024-07-13 05:29:12.401050] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:40:05.956 [2024-07-13 05:29:12.401192] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid900681 ] 00:40:06.214 EAL: No free 2048 kB hugepages reported on node 1 00:40:06.214 [2024-07-13 05:29:12.526095] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:06.472 [2024-07-13 05:29:12.755804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:07.064 05:29:13 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:07.064 05:29:13 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:40:07.064 05:29:13 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:40:07.064 05:29:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:40:07.322 05:29:13 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:40:07.322 05:29:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:07.888 05:29:14 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:07.888 05:29:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:08.146 [2024-07-13 05:29:14.391221] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:08.146 nvme0n1 00:40:08.146 05:29:14 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:40:08.146 05:29:14 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:40:08.146 05:29:14 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:08.146 05:29:14 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:08.146 05:29:14 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:08.146 05:29:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:08.402 05:29:14 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:40:08.402 05:29:14 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:08.402 05:29:14 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:40:08.402 05:29:14 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:40:08.402 05:29:14 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:08.402 05:29:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:08.402 05:29:14 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:40:08.659 05:29:14 keyring_linux -- keyring/linux.sh@25 -- # sn=1021000170 00:40:08.659 05:29:14 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:40:08.659 05:29:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:40:08.660 05:29:14 keyring_linux -- keyring/linux.sh@26 -- # [[ 1021000170 == \1\0\2\1\0\0\0\1\7\0 ]] 00:40:08.660 05:29:14 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1021000170 00:40:08.660 05:29:14 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:40:08.660 05:29:14 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:08.660 Running I/O for 1 seconds... 00:40:10.033 00:40:10.033 Latency(us) 00:40:10.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:10.033 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:40:10.033 nvme0n1 : 1.02 4390.12 17.15 0.00 0.00 28867.74 11845.03 43496.49 00:40:10.033 =================================================================================================================== 00:40:10.033 Total : 4390.12 17.15 0.00 0.00 28867.74 11845.03 43496.49 00:40:10.033 0 00:40:10.033 05:29:16 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:10.033 05:29:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:10.033 05:29:16 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:40:10.033 05:29:16 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:40:10.033 05:29:16 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:10.033 05:29:16 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:10.033 05:29:16 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:10.033 05:29:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:10.291 05:29:16 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:40:10.291 05:29:16 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:10.291 05:29:16 keyring_linux -- keyring/linux.sh@23 -- # return 00:40:10.291 05:29:16 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:10.291 05:29:16 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:40:10.291 05:29:16 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:10.291 05:29:16 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:40:10.291 05:29:16 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:10.291 05:29:16 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:40:10.291 05:29:16 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:10.291 05:29:16 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:10.291 05:29:16 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:10.549 [2024-07-13 05:29:16.891611] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:10.549 [2024-07-13 05:29:16.891953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7000 (107): Transport endpoint is not connected 00:40:10.549 [2024-07-13 05:29:16.892909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7000 (9): Bad file descriptor 00:40:10.549 [2024-07-13 05:29:16.893916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:10.549 [2024-07-13 05:29:16.893950] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:10.549 [2024-07-13 05:29:16.893971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:40:10.549 request: 00:40:10.549 { 00:40:10.549 "name": "nvme0", 00:40:10.549 "trtype": "tcp", 00:40:10.549 "traddr": "127.0.0.1", 00:40:10.549 "adrfam": "ipv4", 00:40:10.549 "trsvcid": "4420", 00:40:10.549 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:10.549 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:10.549 "prchk_reftag": false, 00:40:10.549 "prchk_guard": false, 00:40:10.549 "hdgst": false, 00:40:10.549 "ddgst": false, 00:40:10.549 "psk": ":spdk-test:key1", 00:40:10.549 "method": "bdev_nvme_attach_controller", 00:40:10.549 "req_id": 1 00:40:10.549 } 00:40:10.549 Got JSON-RPC error response 00:40:10.549 response: 00:40:10.549 { 00:40:10.549 "code": -5, 00:40:10.549 "message": "Input/output error" 00:40:10.549 } 00:40:10.549 05:29:16 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:40:10.549 05:29:16 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:10.549 05:29:16 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:10.549 05:29:16 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:10.549 05:29:16 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:40:10.549 05:29:16 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:10.549 05:29:16 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:40:10.549 05:29:16 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:40:10.549 05:29:16 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:40:10.549 05:29:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:10.549 05:29:16 keyring_linux -- keyring/linux.sh@33 -- # sn=1021000170 00:40:10.549 05:29:16 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1021000170 00:40:10.549 1 links removed 00:40:10.549 05:29:16 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:10.549 05:29:16 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:40:10.549 05:29:16 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:40:10.549 05:29:16 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:40:10.549 05:29:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:40:10.549 05:29:16 keyring_linux -- keyring/linux.sh@33 -- # sn=113328319 
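For reference, the keyring_linux variant swaps the PSK file for the kernel session keyring. Condensed into a sketch (the serial printed by keyctl add, 1021000170 in this run, differs per boot; the rpc.py/bperf.sock pair is the same as in the earlier sketch, and the PSK string is the key0 value shown above):

rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
sn=$(keyctl add user :spdk-test:key0 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' @s)
keyctl print "$sn"                          # must round-trip the NVMeTLSkey-1 string
$rpc keyring_linux_set_options --enable     # before framework_start_init, since bdevperf ran with --wait-for-rpc
$rpc framework_start_init
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
keyctl unlink "$sn"                         # cleanup; keyctl reports '1 links removed'

Attaching with :spdk-test:key1, as the NOT case above does, fails with -5, presumably because the target side was configured with key0's PSK only.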
00:40:10.549 05:29:16 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 113328319 00:40:10.549 1 links removed 00:40:10.549 05:29:16 keyring_linux -- keyring/linux.sh@41 -- # killprocess 900681 00:40:10.549 05:29:16 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 900681 ']' 00:40:10.549 05:29:16 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 900681 00:40:10.549 05:29:16 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:40:10.549 05:29:16 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:10.549 05:29:16 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 900681 00:40:10.549 05:29:16 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:10.549 05:29:16 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:10.549 05:29:16 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 900681' 00:40:10.549 killing process with pid 900681 00:40:10.549 05:29:16 keyring_linux -- common/autotest_common.sh@967 -- # kill 900681 00:40:10.549 Received shutdown signal, test time was about 1.000000 seconds 00:40:10.549 00:40:10.549 Latency(us) 00:40:10.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:10.549 =================================================================================================================== 00:40:10.549 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:10.549 05:29:16 keyring_linux -- common/autotest_common.sh@972 -- # wait 900681 00:40:11.922 05:29:17 keyring_linux -- keyring/linux.sh@42 -- # killprocess 900536 00:40:11.922 05:29:17 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 900536 ']' 00:40:11.922 05:29:17 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 900536 00:40:11.922 05:29:17 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:40:11.922 05:29:17 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:11.922 05:29:17 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 900536 00:40:11.922 05:29:18 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:11.922 05:29:18 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:11.922 05:29:18 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 900536' 00:40:11.922 killing process with pid 900536 00:40:11.922 05:29:18 keyring_linux -- common/autotest_common.sh@967 -- # kill 900536 00:40:11.922 05:29:18 keyring_linux -- common/autotest_common.sh@972 -- # wait 900536 00:40:13.821 00:40:13.821 real 0m9.496s 00:40:13.821 user 0m15.735s 00:40:13.821 sys 0m1.974s 00:40:13.821 05:29:20 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:13.821 05:29:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:13.821 ************************************ 00:40:13.821 END TEST keyring_linux 00:40:13.821 ************************************ 00:40:14.080 05:29:20 -- common/autotest_common.sh@1142 -- # return 0 00:40:14.080 05:29:20 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:40:14.080 05:29:20 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:40:14.080 05:29:20 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:40:14.080 05:29:20 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:40:14.080 05:29:20 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:40:14.080 05:29:20 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:40:14.080 05:29:20 -- 
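The keyring_linux flow traced above boils down to a handful of keyctl operations against the Linux session keyring. A minimal sketch of the same sequence, assuming the keyutils tools are installed; the PSK value is the test's sample TLS key, and the rpc.py line in the comment is illustrative rather than a verbatim reproduction of the test script:

    PSK='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
    # Store the PSK as a "user"-type key named :spdk-test:key0 in the session keyring (@s).
    keyctl add user :spdk-test:key0 "$PSK" @s
    # Resolve the key name to its serial number, then confirm the payload round-trips.
    sn=$(keyctl search @s user :spdk-test:key0)
    keyctl print "$sn"
    # SPDK then consumes the key by name, e.g.:
    #   rpc.py bdev_nvme_attach_controller ... --psk :spdk-test:key0
    # Cleanup unlinks the serial, producing the '1 links removed' lines above.
    keyctl unlink "$sn"

The final attach above deliberately uses :spdk-test:key1, a key the test did not populate with a usable PSK, so the JSON-RPC 'Input/output error' response is the expected outcome of that negative case.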
00:40:14.080 05:29:20 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']'
00:40:14.080 05:29:20 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:40:14.080 05:29:20 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:40:14.080 05:29:20 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']'
00:40:14.080 05:29:20 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']'
00:40:14.080 05:29:20 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']'
00:40:14.080 05:29:20 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
00:40:14.080 05:29:20 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:40:14.080 05:29:20 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:40:14.080 05:29:20 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']'
00:40:14.080 05:29:20 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:40:14.080 05:29:20 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]]
00:40:14.080 05:29:20 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:40:14.080 05:29:20 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:40:14.080 05:29:20 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:40:14.080 05:29:20 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT
00:40:14.080 05:29:20 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup
00:40:14.080 05:29:20 -- common/autotest_common.sh@722 -- # xtrace_disable
00:40:14.080 05:29:20 -- common/autotest_common.sh@10 -- # set +x
00:40:14.080 05:29:20 -- spdk/autotest.sh@383 -- # autotest_cleanup
00:40:14.080 05:29:20 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:40:14.080 05:29:20 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:40:14.080 05:29:20 -- common/autotest_common.sh@10 -- # set +x
00:40:15.983 INFO: APP EXITING
00:40:15.983 INFO: killing all VMs
00:40:15.983 INFO: killing vhost app
00:40:15.983 INFO: EXIT DONE
00:40:16.916 0000:88:00.0 (8086 0a54): Already using the nvme driver
00:40:16.916 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:40:16.916 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:40:16.916 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:40:16.916 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:40:16.916 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:40:16.916 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:40:16.916 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:40:16.916 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:40:16.916 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:40:16.916 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:40:16.916 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:40:16.916 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:40:16.916 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:40:16.916 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:40:16.916 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:40:16.916 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:40:18.288 Cleaning
00:40:18.288 Removing: /var/run/dpdk/spdk0/config
00:40:18.288 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:40:18.288 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:40:18.288 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:40:18.288 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:40:18.288 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:40:18.288 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:40:18.288 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:40:18.288 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:40:18.288 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:40:18.288 Removing: /var/run/dpdk/spdk0/hugepage_info
00:40:18.288 Removing: /var/run/dpdk/spdk1/config
00:40:18.288 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:40:18.288 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:40:18.288 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:40:18.288 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:40:18.288 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:40:18.288 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:40:18.288 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:40:18.288 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:40:18.288 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:40:18.288 Removing: /var/run/dpdk/spdk1/hugepage_info
00:40:18.288 Removing: /var/run/dpdk/spdk1/mp_socket
00:40:18.288 Removing: /var/run/dpdk/spdk2/config
00:40:18.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:40:18.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:40:18.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:40:18.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:40:18.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:40:18.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:40:18.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:40:18.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:40:18.288 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:40:18.288 Removing: /var/run/dpdk/spdk2/hugepage_info
00:40:18.288 Removing: /var/run/dpdk/spdk3/config
00:40:18.288 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:40:18.288 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:40:18.288 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:40:18.288 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:40:18.288 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:40:18.288 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:40:18.288 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:40:18.288 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:40:18.288 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:40:18.288 Removing: /var/run/dpdk/spdk3/hugepage_info
00:40:18.288 Removing: /var/run/dpdk/spdk4/config
00:40:18.288 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:40:18.288 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:40:18.288 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:40:18.288 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:40:18.288 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:40:18.288 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:40:18.288 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:40:18.288 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:40:18.288 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:40:18.288 Removing: /var/run/dpdk/spdk4/hugepage_info
00:40:18.288 Removing: /dev/shm/bdev_svc_trace.1
00:40:18.288 Removing: /dev/shm/nvmf_trace.0
00:40:18.288 Removing: /dev/shm/spdk_tgt_trace.pid551268
00:40:18.288 Removing: /var/run/dpdk/spdk0
00:40:18.288 Removing: /var/run/dpdk/spdk1
00:40:18.288 Removing: /var/run/dpdk/spdk2
00:40:18.288 Removing: /var/run/dpdk/spdk3
00:40:18.288 Removing: /var/run/dpdk/spdk4
00:40:18.288 Removing: /var/run/dpdk/spdk_pid548396
00:40:18.288 Removing: /var/run/dpdk/spdk_pid549525
00:40:18.288 Removing: /var/run/dpdk/spdk_pid551268
00:40:18.288 Removing: /var/run/dpdk/spdk_pid551998
00:40:18.288 Removing: /var/run/dpdk/spdk_pid552950
00:40:18.288 Removing: /var/run/dpdk/spdk_pid553369
00:40:18.288 Removing: /var/run/dpdk/spdk_pid554348
00:40:18.288 Removing: /var/run/dpdk/spdk_pid554607
00:40:18.288 Removing: /var/run/dpdk/spdk_pid555153
00:40:18.288 Removing: /var/run/dpdk/spdk_pid557197
00:40:18.288 Removing: /var/run/dpdk/spdk_pid558379
00:40:18.289 Removing: /var/run/dpdk/spdk_pid558962
00:40:18.289 Removing: /var/run/dpdk/spdk_pid559433
00:40:18.289 Removing: /var/run/dpdk/spdk_pid560020
00:40:18.289 Removing: /var/run/dpdk/spdk_pid560609
00:40:18.289 Removing: /var/run/dpdk/spdk_pid560805
00:40:18.289 Removing: /var/run/dpdk/spdk_pid561058
00:40:18.289 Removing: /var/run/dpdk/spdk_pid561370
00:40:18.289 Removing: /var/run/dpdk/spdk_pid561826
00:40:18.289 Removing: /var/run/dpdk/spdk_pid564565
00:40:18.289 Removing: /var/run/dpdk/spdk_pid565125
00:40:18.289 Removing: /var/run/dpdk/spdk_pid565606
00:40:18.289 Removing: /var/run/dpdk/spdk_pid565827
00:40:18.289 Removing: /var/run/dpdk/spdk_pid567062
00:40:18.289 Removing: /var/run/dpdk/spdk_pid567325
00:40:18.289 Removing: /var/run/dpdk/spdk_pid568595
00:40:18.289 Removing: /var/run/dpdk/spdk_pid568823
00:40:18.289 Removing: /var/run/dpdk/spdk_pid569262
00:40:18.289 Removing: /var/run/dpdk/spdk_pid569401
00:40:18.289 Removing: /var/run/dpdk/spdk_pid569835
00:40:18.289 Removing: /var/run/dpdk/spdk_pid569975
00:40:18.289 Removing: /var/run/dpdk/spdk_pid571012
00:40:18.289 Removing: /var/run/dpdk/spdk_pid571288
00:40:18.289 Removing: /var/run/dpdk/spdk_pid571608
00:40:18.289 Removing: /var/run/dpdk/spdk_pid572056
00:40:18.289 Removing: /var/run/dpdk/spdk_pid572333
00:40:18.289 Removing: /var/run/dpdk/spdk_pid572651
00:40:18.289 Removing: /var/run/dpdk/spdk_pid572950
00:40:18.289 Removing: /var/run/dpdk/spdk_pid573295
00:40:18.289 Removing: /var/run/dpdk/spdk_pid573643
00:40:18.289 Removing: /var/run/dpdk/spdk_pid573939
00:40:18.289 Removing: /var/run/dpdk/spdk_pid574349
00:40:18.289 Removing: /var/run/dpdk/spdk_pid574646
00:40:18.289 Removing: /var/run/dpdk/spdk_pid575045
00:40:18.289 Removing: /var/run/dpdk/spdk_pid575338
00:40:18.289 Removing: /var/run/dpdk/spdk_pid575633
00:40:18.289 Removing: /var/run/dpdk/spdk_pid576037
00:40:18.289 Removing: /var/run/dpdk/spdk_pid576331
00:40:18.289 Removing: /var/run/dpdk/spdk_pid576741
00:40:18.289 Removing: /var/run/dpdk/spdk_pid577026
00:40:18.289 Removing: /var/run/dpdk/spdk_pid577322
00:40:18.289 Removing: /var/run/dpdk/spdk_pid577732
00:40:18.289 Removing: /var/run/dpdk/spdk_pid578024
00:40:18.289 Removing: /var/run/dpdk/spdk_pid578433
00:40:18.289 Removing: /var/run/dpdk/spdk_pid578823
00:40:18.289 Removing: /var/run/dpdk/spdk_pid579255
00:40:18.289 Removing: /var/run/dpdk/spdk_pid579550
00:40:18.289 Removing: /var/run/dpdk/spdk_pid580377
00:40:18.289 Removing: /var/run/dpdk/spdk_pid580979
00:40:18.289 Removing: /var/run/dpdk/spdk_pid583434
00:40:18.289 Removing: /var/run/dpdk/spdk_pid639655
00:40:18.289 Removing: /var/run/dpdk/spdk_pid642412
00:40:18.289 Removing: /var/run/dpdk/spdk_pid649625
00:40:18.289 Removing: /var/run/dpdk/spdk_pid653055
00:40:18.289 Removing: /var/run/dpdk/spdk_pid655665
00:40:18.289 Removing: /var/run/dpdk/spdk_pid656073
00:40:18.289 Removing: /var/run/dpdk/spdk_pid660296
00:40:18.289 Removing: /var/run/dpdk/spdk_pid666124
00:40:18.289 Removing: /var/run/dpdk/spdk_pid666405
00:40:18.289 Removing: /var/run/dpdk/spdk_pid669300
00:40:18.289 Removing: /var/run/dpdk/spdk_pid673710
00:40:18.289 Removing: /var/run/dpdk/spdk_pid676176
00:40:18.289 Removing: /var/run/dpdk/spdk_pid683331
00:40:18.289 Removing: /var/run/dpdk/spdk_pid688832
00:40:18.289 Removing: /var/run/dpdk/spdk_pid690262
00:40:18.289 Removing: /var/run/dpdk/spdk_pid691064
00:40:18.289 Removing: /var/run/dpdk/spdk_pid702047
00:40:18.289 Removing: /var/run/dpdk/spdk_pid704536
00:40:18.289 Removing: /var/run/dpdk/spdk_pid730432
00:40:18.548 Removing: /var/run/dpdk/spdk_pid734011
00:40:18.548 Removing: /var/run/dpdk/spdk_pid735186
00:40:18.548 Removing: /var/run/dpdk/spdk_pid736634
00:40:18.548 Removing: /var/run/dpdk/spdk_pid736909
00:40:18.548 Removing: /var/run/dpdk/spdk_pid737185
00:40:18.548 Removing: /var/run/dpdk/spdk_pid737456
00:40:18.548 Removing: /var/run/dpdk/spdk_pid738289
00:40:18.548 Removing: /var/run/dpdk/spdk_pid739737
00:40:18.548 Removing: /var/run/dpdk/spdk_pid741004
00:40:18.548 Removing: /var/run/dpdk/spdk_pid741698
00:40:18.548 Removing: /var/run/dpdk/spdk_pid743586
00:40:18.548 Removing: /var/run/dpdk/spdk_pid744414
00:40:18.548 Removing: /var/run/dpdk/spdk_pid745236
00:40:18.548 Removing: /var/run/dpdk/spdk_pid747895
00:40:18.548 Removing: /var/run/dpdk/spdk_pid751545
00:40:18.548 Removing: /var/run/dpdk/spdk_pid755079
00:40:18.548 Removing: /var/run/dpdk/spdk_pid779762
00:40:18.548 Removing: /var/run/dpdk/spdk_pid782664
00:40:18.548 Removing: /var/run/dpdk/spdk_pid787310
00:40:18.548 Removing: /var/run/dpdk/spdk_pid788891
00:40:18.548 Removing: /var/run/dpdk/spdk_pid790522
00:40:18.548 Removing: /var/run/dpdk/spdk_pid793456
00:40:18.548 Removing: /var/run/dpdk/spdk_pid796204
00:40:18.548 Removing: /var/run/dpdk/spdk_pid800817
00:40:18.548 Removing: /var/run/dpdk/spdk_pid800826
00:40:18.548 Removing: /var/run/dpdk/spdk_pid803851
00:40:18.548 Removing: /var/run/dpdk/spdk_pid804109
00:40:18.548 Removing: /var/run/dpdk/spdk_pid804248
00:40:18.548 Removing: /var/run/dpdk/spdk_pid804528
00:40:18.548 Removing: /var/run/dpdk/spdk_pid804639
00:40:18.548 Removing: /var/run/dpdk/spdk_pid805725
00:40:18.548 Removing: /var/run/dpdk/spdk_pid807016
00:40:18.548 Removing: /var/run/dpdk/spdk_pid808202
00:40:18.548 Removing: /var/run/dpdk/spdk_pid809377
00:40:18.548 Removing: /var/run/dpdk/spdk_pid810564
00:40:18.548 Removing: /var/run/dpdk/spdk_pid811856
00:40:18.548 Removing: /var/run/dpdk/spdk_pid815784
00:40:18.548 Removing: /var/run/dpdk/spdk_pid816283
00:40:18.548 Removing: /var/run/dpdk/spdk_pid818121
00:40:18.548 Removing: /var/run/dpdk/spdk_pid818971
00:40:18.548 Removing: /var/run/dpdk/spdk_pid822951
00:40:18.548 Removing: /var/run/dpdk/spdk_pid825052
00:40:18.548 Removing: /var/run/dpdk/spdk_pid828743
00:40:18.548 Removing: /var/run/dpdk/spdk_pid832456
00:40:18.548 Removing: /var/run/dpdk/spdk_pid839065
00:40:18.548 Removing: /var/run/dpdk/spdk_pid843805
00:40:18.548 Removing: /var/run/dpdk/spdk_pid843812
00:40:18.548 Removing: /var/run/dpdk/spdk_pid857018
00:40:18.548 Removing: /var/run/dpdk/spdk_pid857679
00:40:18.548 Removing: /var/run/dpdk/spdk_pid858347
00:40:18.548 Removing: /var/run/dpdk/spdk_pid858985
00:40:18.548 Removing: /var/run/dpdk/spdk_pid859992
00:40:18.548 Removing: /var/run/dpdk/spdk_pid860544
00:40:18.548 Removing: /var/run/dpdk/spdk_pid861200
00:40:18.548 Removing: /var/run/dpdk/spdk_pid861861
00:40:18.548 Removing: /var/run/dpdk/spdk_pid864627
00:40:18.548 Removing: /var/run/dpdk/spdk_pid865020
00:40:18.548 Removing: /var/run/dpdk/spdk_pid869063
00:40:18.548 Removing: /var/run/dpdk/spdk_pid869250
00:40:18.548 Removing: /var/run/dpdk/spdk_pid870986
00:40:18.548 Removing: /var/run/dpdk/spdk_pid876406
00:40:18.548 Removing: /var/run/dpdk/spdk_pid876417
00:40:18.548 Removing: /var/run/dpdk/spdk_pid879491
00:40:18.548 Removing: /var/run/dpdk/spdk_pid881579
00:40:18.548 Removing: /var/run/dpdk/spdk_pid883092
00:40:18.548 Removing: /var/run/dpdk/spdk_pid884071
00:40:18.548 Removing: /var/run/dpdk/spdk_pid885604
00:40:18.548 Removing: /var/run/dpdk/spdk_pid886597
00:40:18.548 Removing: /var/run/dpdk/spdk_pid892247
00:40:18.548 Removing: /var/run/dpdk/spdk_pid892636
00:40:18.548 Removing: /var/run/dpdk/spdk_pid893034
00:40:18.548 Removing: /var/run/dpdk/spdk_pid894933
00:40:18.548 Removing: /var/run/dpdk/spdk_pid895331
00:40:18.548 Removing: /var/run/dpdk/spdk_pid895732
00:40:18.548 Removing: /var/run/dpdk/spdk_pid898050
00:40:18.548 Removing: /var/run/dpdk/spdk_pid898186
00:40:18.548 Removing: /var/run/dpdk/spdk_pid899783
00:40:18.548 Removing: /var/run/dpdk/spdk_pid900536
00:40:18.548 Removing: /var/run/dpdk/spdk_pid900681
00:40:18.548 Clean
00:40:18.548 05:29:25 -- common/autotest_common.sh@1451 -- # return 0
00:40:18.548 05:29:25 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:40:18.548 05:29:25 -- common/autotest_common.sh@728 -- # xtrace_disable
00:40:18.548 05:29:25 -- common/autotest_common.sh@10 -- # set +x
00:40:18.806 05:29:25 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:40:18.806 05:29:25 -- common/autotest_common.sh@728 -- # xtrace_disable
00:40:18.806 05:29:25 -- common/autotest_common.sh@10 -- # set +x
00:40:18.806 05:29:25 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:40:18.806 05:29:25 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:40:18.806 05:29:25 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:40:18.806 05:29:25 -- spdk/autotest.sh@391 -- # hash lcov
00:40:18.806 05:29:25 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:40:18.806 05:29:25 -- spdk/autotest.sh@393 -- # hostname
00:40:18.806 05:29:25 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:40:18.806 geninfo: WARNING: invalid characters removed from testname!
00:40:45.387 05:29:51 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:50.662 05:29:56 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:53.944 05:30:00 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:58.125 05:30:03 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:00.649 05:30:06 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:03.932 05:30:10 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:08.110 05:30:13 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
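The lcov sequence above is a merge-then-filter pass: autotest.sh@394 folds the pre-test baseline and the post-test capture into cov_total.info, @395 through @399 strip out-of-tree and uninteresting sources, and @400 deletes the intermediates. A condensed sketch of the same steps, with the repeated flag set collected into an array and the long output paths shortened for readability (the job also passes the genhtml --rc flags, omitted here):

    LCOV=(lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q)
    # Merge the pre-test baseline with the post-test capture.
    "${LCOV[@]}" -a cov_base.info -a cov_test.info -o cov_total.info
    # Remove DPDK, system headers, and sample apps from the merged tracefile.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        "${LCOV[@]}" -r cov_total.info "$pattern" -o cov_total.info
    done
    # Drop the intermediate captures, as autotest.sh@400 does.
    rm -f cov_base.info cov_test.info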
00:41:08.110 05:30:13 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:41:08.110 05:30:13 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:41:08.110 05:30:13 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:41:08.110 05:30:13 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:41:08.110 05:30:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:41:08.111 05:30:13 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:41:08.111 05:30:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:41:08.111 05:30:13 -- paths/export.sh@5 -- $ export PATH
00:41:08.111 05:30:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:41:08.111 05:30:13 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:41:08.111 05:30:13 -- common/autobuild_common.sh@444 -- $ date +%s
00:41:08.111 05:30:13 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720841413.XXXXXX
00:41:08.111 05:30:13 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720841413.pCQICb
00:41:08.111 05:30:13 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:41:08.111 05:30:13 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:41:08.111 05:30:13 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:41:08.111 05:30:13 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:41:08.111 05:30:13 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:41:08.111 05:30:13 -- common/autobuild_common.sh@460 -- $ get_config_params
00:41:08.111 05:30:13 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:41:08.111 05:30:13 -- common/autotest_common.sh@10 -- $ set +x
00:41:08.111 05:30:13 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:41:08.111 05:30:13 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:41:08.111 05:30:13 -- pm/common@17 -- $ local monitor
00:41:08.111 05:30:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:08.111 05:30:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:08.111 05:30:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:08.111 05:30:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:08.111 05:30:13 -- pm/common@21 -- $ date +%s
00:41:08.111 05:30:13 -- pm/common@21 -- $ date +%s
00:41:08.111 05:30:13 -- pm/common@25 -- $ sleep 1
00:41:08.111 05:30:13 -- pm/common@21 -- $ date +%s
00:41:08.111 05:30:13 -- pm/common@21 -- $ date +%s
00:41:08.111 05:30:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720841413
00:41:08.111 05:30:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720841413
00:41:08.111 05:30:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720841413
00:41:08.111 05:30:13 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720841413
00:41:08.111 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720841413_collect-vmstat.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720841413_collect-cpu-load.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720841413_collect-cpu-temp.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720841413_collect-bmc-pm.bmc.pm.log
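start_monitor_resources launches the four pm collectors above, and stop_monitor_resources (registered as an EXIT trap just below) tears them down via their pid files. A rough sketch of that pattern; the pid-file names and the power output directory follow the log, while the explicit '&' is an assumption here, since the real pm/common helpers manage backgrounding themselves:

    POWER_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
    TAG="monitor.autopackage.sh.$(date +%s)"
    # Start: each collector logs under $POWER_DIR and records its pid in collect-<name>.pid.
    ./collect-cpu-load -d "$POWER_DIR" -l -p "$TAG" &
    # Stop: signal whatever pid each collector recorded, mirroring pm/common@42-@50 below.
    for pidfile in "$POWER_DIR"/collect-*.pid; do
        [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")"
    done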
00:41:08.379 05:30:14 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:41:08.379 05:30:14 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:41:08.379 05:30:14 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:41:08.379 05:30:14 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:41:08.379 05:30:14 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:41:08.379 05:30:14 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:41:08.379 05:30:14 -- spdk/autopackage.sh@19 -- $ timing_finish
00:41:08.379 05:30:14 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:41:08.379 05:30:14 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:41:08.379 05:30:14 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:41:08.380 05:30:14 -- spdk/autopackage.sh@20 -- $ exit 0
00:41:08.380 05:30:14 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:41:08.380 05:30:14 -- pm/common@29 -- $ signal_monitor_resources TERM
00:41:08.380 05:30:14 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:41:08.380 05:30:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:08.380 05:30:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:41:08.380 05:30:14 -- pm/common@44 -- $ pid=913798
00:41:08.380 05:30:14 -- pm/common@50 -- $ kill -TERM 913798
00:41:08.380 05:30:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:08.380 05:30:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:41:08.380 05:30:14 -- pm/common@44 -- $ pid=913800
00:41:08.380 05:30:14 -- pm/common@50 -- $ kill -TERM 913800
00:41:08.380 05:30:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:08.380 05:30:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:41:08.380 05:30:14 -- pm/common@44 -- $ pid=913802
00:41:08.380 05:30:14 -- pm/common@50 -- $ kill -TERM 913802
00:41:08.380 05:30:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:08.380 05:30:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:41:08.380 05:30:14 -- pm/common@44 -- $ pid=913829
00:41:08.380 05:30:14 -- pm/common@50 -- $ sudo -E kill -TERM 913829
00:41:08.639 + [[ -n 461443 ]]
00:41:08.639 + sudo kill 461443
00:41:08.646 [Pipeline] }
00:41:08.660 [Pipeline] // stage
00:41:08.664 [Pipeline] }
00:41:08.679 [Pipeline] // timeout
00:41:08.683 [Pipeline] }
00:41:08.698 [Pipeline] // catchError
00:41:08.702 [Pipeline] }
00:41:08.717 [Pipeline] // wrap
00:41:08.721 [Pipeline] }
00:41:08.734 [Pipeline] // catchError
00:41:08.741 [Pipeline] stage
00:41:08.743 [Pipeline] { (Epilogue)
00:41:08.754 [Pipeline] catchError
00:41:08.756 [Pipeline] {
00:41:08.768 [Pipeline] echo
00:41:08.769 Cleanup processes
00:41:08.773 [Pipeline] sh
00:41:09.048 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:41:09.048 913942 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:41:09.048 914063 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:41:09.062 [Pipeline] sh
00:41:09.339 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:41:09.339 ++ grep -v 'sudo pgrep'
00:41:09.339 ++ awk '{print $1}'
00:41:09.339 + sudo kill -9 913942
00:41:09.351 [Pipeline] sh
00:41:09.630 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:41:19.595 [Pipeline] sh
00:41:19.901 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:41:19.901 Artifacts sizes are good
00:41:19.915 [Pipeline] archiveArtifacts
00:41:19.921 Archiving artifacts
00:41:20.119 [Pipeline] sh
00:41:20.397 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:41:20.411 [Pipeline] cleanWs
00:41:20.419 [WS-CLEANUP] Deleting project workspace...
00:41:20.419 [WS-CLEANUP] Deferred wipeout is used...
00:41:20.428 [WS-CLEANUP] done
00:41:20.430 [Pipeline] }
00:41:20.450 [Pipeline] // catchError
00:41:20.462 [Pipeline] sh
00:41:20.739 + logger -p user.info -t JENKINS-CI
00:41:20.748 [Pipeline] }
00:41:20.765 [Pipeline] // stage
00:41:20.772 [Pipeline] }
00:41:20.788 [Pipeline] // node
00:41:20.793 [Pipeline] End of Pipeline
00:41:20.824 Finished: SUCCESS